First-year students’ understanding of academic integrity (AI) is often unevenly distributed, unsophisticated, or overconfident (e.g., Brooks et al., 2011; Childers & Bruton, 2016; Howard, 1995; Jurdi et al., 2012; Locquiao & Ives, 2020; Newton, 2015; Power, 2009; Roig, 1997; Wilkinson, 2009), and undergraduate students more generally show poor comprehension and/or uptake of AI, resulting in misconduct (e.g., Colella-Sandercock & Alahmadi, 2015; Dawson, 2004; Christensen Hughes & McCabe, 2006). Until recently, conversations about AI in higher education in response to these issues have defaulted to a model of deficiency and distrust, focusing on detection rather than education, and perpetuating assumptions about who commits misconduct (such as international students) without considering the systemic issues and biases that might account for the disproportionate representation of those populations in misconduct cases.

This chapter shares findings of and recommendations from a three-year initiative at the University of British Columbia, Canada, to develop and assess enhanced and explicit instruction in academic integrity in first-year writing courses, an enterprise that now involves 42 faculty members teaching about 5000 students each year. This project began from the appreciation that, as an institution, we needed to close the gap between our expectations of academic integrity and students’ understanding of those expectations, and to make explicit what is often treated as assumed understanding. This approach was intended to help students develop more robust knowledge and appreciation for AI as a core element of the academic community to which they now belong, and to advocate for pedagogical rather than punitive frameworks that support students as members of the academic community.

By outlining the design and implementation of a major project to change how academic integrity has been taught in a particular set of courses, with the broader goal of advocating for changes in undergraduate (and graduate) education that will help cultivate a “culture of integrity” (Eaton & Edino, 2018, p. 1), this discussion adds to existing literature on undergraduate understanding of AI and on pedagogical approaches to teaching it. I illustrate how what I call the “pedagogies of integrity” that we have developed and adopted in this project have led to improved uptake by students (and instructors) of AI as both theory and practice, resulting in a change in the number as well as the type of academic misconduct cases, and have yielded significant insights about the place of AI in larger conversations about student belonging, wellness, and access. In its attention to faculty experiences and insights, the chapter addresses a gap in AI scholarship identified by Eaton and Edino (2018), and extends the considerations of AI institutionalization that Bertram Gallant and Drinan (2008) outline. I provide an overview of the structure and organization of our project (its infrastructure, staffing, and funding), our practices, and our major findings. I conclude with next steps for developing pedagogies of integrity beyond first-year writing courses at UBC, and how these discussions are resonating across disciplines and faculties of our campus and beyond.

Starting Our “Hearts”: Project Background

This project stems from my own “lightbulb” moment when, after almost two decades as an instructor in writing and English literature courses, I moved into an administrative role, one in which I had to meet with students for cases of alleged academic misconduct reported by their instructors. From this perspective—a step removed from the emotional aspects that a discovery of misconduct can provoke for the instructor—I had the opportunity to listen to and learn from our students about the ways we as an institution were clearly not doing a very good job of making academic integrity either understandable or desirable: the students in these meetings not only did not know how to meet the expectations of ethical research, but they also had little idea of why we cared about it so much, or why they themselves should care. Instead, more often than not, we were taking the unproductive approach Rebecca Moore Howard (1992, 1995) and Cheryl Kier (2014) each categorize as punishing students for knowledge they did not have. I recognized that we were expecting students, even first-years, to know and understand how to apply a concept that even seasoned scholars sometimes struggle with. We weren’t teaching it right, or in enough depth, yet we attached such weight to it—using it as a measure not of aptitude but of moral fibre: student plagiarists are not typically thought of as “bad citers,” but as cheaters, or, as Mary Mulholland (2020) notes, as “dishonest” (p. 111) and “unethical” (p. 105).

Faculty, like students, can also have “teachable moments,” and this was mine. It had become clear that this “punitive” rather than “proactive” approach, in Sarah Elaine Eaton et al.’s words (2017, pp. 29–30), was not the only option. We had opportunities to move from the default model of blaming and shaming that Ho (2015), Mulholland (2020), and others argue characterizes higher education’s AI approach, with its dominant “judicio-moral paradigm” (Howard, 1992, p. 235), and see AI as a skill and way of knowing that—like all other concepts we think of as foundational to learning in our courses—we can and need to teach, explicitly, and with recognition of its complexity. With this new insight, and in collaboration with a similarly-minded Associate Dean Academic, I initiated a pilot project in 2016 that brought together eight full-time faculty members (both tenure-track faculty and lecturers, colleagues with multi-year contracts) who were teaching in Arts’ First-Year Programs (FYP) to think with me about what we could do differently in our courses so that our students not only knew how to meet the expectations of academic integrity (itself a major learning curve), but also why they should care to do so, beyond just avoiding getting caught for violating it. Adopting an educative approach, framed by integrity, how could we equip students with the skills they need to meet the expectations of ethical knowledge production and a compelling rationale for doing so?

To support our shift from a moralistic and affective approach to academic integrity to a theoretically-informed, evidence-based, and pedagogical one, we began with research into the state of misconduct in our courses. I reviewed the investigations I’d undertaken into reported cases to identify patterns in causes of misconduct, and held several workshops with FYP faculty to learn what “pain points” they were identifying—unsurprisingly, these included paraphrasing and citation, aptitudes commonly identified as challenging for students, as well as general research and note-taking practices (e.g., Colella-Sandercock & Alahmadi, 2015)—and how they were teaching (or not teaching) the topic. To understand how other North American research universities were approaching this issue and identify best (and worst) practices as well as existing resources on which we might draw, we undertook an extensive literature and policy review. Having identified the most urgent gaps in student knowledge in our courses, and with a developing sense of the scholarly and policy conversation, we competed for funding from UBC’s Teaching and Learning Enhancement Fund to support a larger initiative to implement and assess the effects of explicit and enhanced instruction on academic integrity in our first-year writing courses. Entitled “Our Cheating Hearts?: Changing the Conversation Through Academic Integrity Curriculum”—with the question mark signaling our interest in challenging the normative discourse about academic misconduct, who commits it, and why—the project was awarded $122,707 CAD in funding over three years. This funding included support for one teaching release in each of the first two years (requiring matching support from First-Year Programs), each taken by junior faculty members who were instrumental members of the working group and who had the heaviest teaching loads (this release allowed them to lead portions of the design and assessment components, including facilitating the focus groups), as well as remuneration and refreshments for student participants in focus groups. The majority of the funding was dedicated to hiring a project coordinator (full-time in the first two years, part-time in the last year) and graduate and undergraduate students in part-time work-learn positions. This staff support looked after data collection, cleaning, and analysis; meeting and workshop organization; poster and slide design; consultation on process (e.g., redesign of survey questions); draft reporting; and general troubleshooting. Given that I, as principal investigator, not only teach several classes a year but am also a full-time administrator, this project support was essential to the project’s success. Notably, however, we have been significantly under budget throughout the project’s tenure. Though this certainly illustrates good stewardship and judicious spending, it also demonstrates that similar projects could be done with much less investment, especially if in-kind support could be provided by the institution.

First-Year Programs courses—in the Arts One (100 students) and the Coordinated Arts Program (550 students) cohort learning communities, and in WRDS 150, a 13-week academic writing class (2020: 193 sections / 5628 students)—were well-situated for this project, since our curriculum was already implementing many of the best practices other scholars have identified as ideal for student understanding of ethical research (e.g., Childers & Bruton, 2016; Colella-Sandercock & Alahmadi, 2015; Eaton et al., 2017). For example, our courses include introductions to the theory and practice of citation, documentation, reporting expressions, and the summary and synthesis of sources, and assignments are scaffolded. Yet, as we charted causes of academic misconduct (both in “teachable moments”—issues at an early stage in the course, addressed directly with the student—and reportable cases), we noted the need for better grounding in core citation and research skills. In disciplinary meetings with me, students also indicated that they were getting so stuck on minor details of practice (e.g., how to cite a particular kind of source) that they missed the larger point of documentation; this experience reflects similar findings that students focus on the “mechanistic” elements of citation without consideration of AI more broadly (Brooks et al., 2011; Childers & Bruton, 2016; Howard, 1995; Newton, 2015). This pattern, as well as others that emerged in the misconduct meetings, highlighted that we needed not only to reinforce our instruction on how to meet the expectations of academic integrity but also—crucially—to be far more explicit about why it is important and has meaning for students themselves.

To evaluate the effect of the project, we surveyed students and faculty, initially in two groups (“working group” [WG] and “non-working group” [NWG]), and after the full-scale implementation in September 2018, without such division. We held follow-up interviews with faculty and focus groups with students and peer tutors, and presented findings for discussion at FYP meetings. A final series of focus groups and surveys planned for April 2020 was postponed due to the pandemic. We also tracked the number and type of misconduct cases reported to the FYP Chair.

Considerations and Project Principles

The question of making integrity meaningful and relevant presented a particular and additional challenge because our courses meet the first-year writing requirement, meaning that many students take them not by choice but under duress, and see a writing course as quite separate from the “real” work of their other courses and intended major or profession. If they see “academic integrity” as a concern particular to that course, rather than a value and practice commonly held across the university, then it becomes even more difficult for them to apply these principles to all of their work. Having identified these challenges, considerable as they were, we now had the opportunity to address them. As we moved from the pilot to the first year of funding, we built on the following premises:

  • We shifted our language to “academic integrity,” naming what we aspired to rather than what we would punish or want to avoid. We saw this change in wording not merely as semantic but as a commitment to a set of principles, one that resonated with students and faculty. For example, a student focus group participant noted in 2017, “[the term academic integrity]... gives people something to live up to. Cheating is just like, don’t cheat, but then there are still a lot of things you could do that are like, not cheating but they’re not exactly OK either.” A respondent to our 2018 faculty survey (n = 18) noted not only a perceived difference in student knowledge under the new approach but also that the framing itself was a lot more palatable: “I found that the focus on academic integrity—rather than misconduct—helped me reframe all this in a more positive light. Rather than making them fearful that they might accidentally do something wrong, it gives them something positive to aspire to.”

  • We recognized that students come to our courses with understandings of “academic integrity” that are unevenly distributed, often unsophisticated, and typically overconfident (Brooks et al., 2011; Childers & Bruton, 2016; Howard, 1992, 1995; Locquiao & Ives, 2020; Newton, 2015; Power, 2009), and therefore we should not assume any common understanding of either the concept or knowledge of how to apply it. Further, although this characterization is particularly true of first-year students (as Wilkinson, 2009, has similarly found), it is not exclusively so, and so we should assume that all members of our class can benefit from explicit instruction in these expectations. In this approach, we aligned with the principles of Universal Design for Learning: accommodations for some members of a class result in improved learning for all (e.g., Scott et al., 2003). This recognition supports our shift from blaming students for what we are not teaching them, and counters dominant attitudes that students commit misconduct from wilfulness and dishonesty more often than ignorance (an assumption not supported by our study of reportable cases in FYP). It also models an ethical pedagogy that addresses the needs of a diverse student body.

  • Given that our curriculum already addressed many aspects of research and its production, we would focus on ways to extend those existing parts of the curriculum and make the instruction more explicit, with greater development of the rationale (the why) and targeted instruction in elements of application (the how). Since the intention was that, over the three years of the project, this enhanced curriculum would be included in all sections of our courses (at that time, with a combined enrolment of 2700 students), scalability and faculty buy-in were key considerations. Recognizing the significant new work this curriculum redesign involved, and the high number of contract sessional colleagues teaching in our units for whom such additional labour would be unpaid (and, as Ho argues, for whom such work can be “burdensome,” 2015, p. 737), we determined to develop resources and materials that would be shared and that other instructors could adopt and adapt; we added a “sandbox” site for all FYP instructors on Canvas (UBC’s learning management system) on which members could upload and access the exercises and materials we developed. This site has been an essential starting place for faculty (in our 2018 (n = 18) and 2019 (n = 17) faculty surveys, all instructors report using it), and is now mirrored in an open-access wiki hosted by UBC’s Chapman Learning Commons.

Project Findings: Strategies for Building and Maintaining an AI Infrastructure

We accumulated a rich body of experiences and data from this project, and from them I outline the following seven strategies that I argue were central to the success of creating an AI culture.

Get Faculty on Board

Faculty understanding of and attitudes towards academic integrity play an essential role in maintaining a proactive and educative AI culture (Childers & Bruton, 2016; Colella-Sandercock & Alahmadi, 2015; Löfström et al., 2015; Evans-Tokaryk, 2014; Brooks et al., 2011; Bertram Gallant & Drinan, 2008; Wang, 2008; Christensen Hughes & McCabe, 2006). Similarly, it was clear that our project would not succeed without faculty being able to perceive that it met their needs as well as their students’. The ground-up approach we have taken, driven by instructors, based on practices in our own classrooms and our understanding of the needs of students in our courses, has led to widespread buy-in. In addition to taking a collaborative approach to curriculum development, other aspects of the project have helped us avoid the push-back that might be associated with curricular changes imposed by the administration. At the outset, from my position as both principal investigator and Chair of these units, as I considered the broader scale-up to all sections and courses in FYP, I grappled with the challenges of how to implement this new expectation across not only a significant number of sections and instructors, but also different courses: how could we negotiate concerns about academic freedom and instructor autonomy, as well as the different cultures of the three distinct programs comprising FYP? I determined that it was more important that academic integrity (the aspirational value and practice) be taught in these courses than that I mandate exactly how instructors did so. In other words, I aimed to “change the conversation” faculty were having with each other and in their courses, switching from misconduct to integrity, and taking up the responsibility of teaching what this term means, and I recognized that what that looked like might differ somewhat in both scope and content from section to section. Our gradual implementation—a small number of sections and faculty in the pilot and first year—allowed us a long runway to gain cooperation, including the ability for the working group to report back to the wider group on the successes (and challenges) of the new approach.

By the time we were asking all faculty to participate, they were familiar with its premises, were provided with a “toolkit” to adapt, and were presented with fairly persuasive findings that, even with relatively small changes to our curriculum, we could see significant differences in students’ awareness and understanding of AI. Surveys run in October 2017 of students in the working group (n = 86) and non-working group (n = 61) sections were particularly compelling, with three questions especially showing the project’s promise: when asked if they had heard of the term “academic integrity” and whether they knew what it meant, 100% of students in the working group (WG) had heard of the term AI and only 3.9% indicated that they were unsure of its meaning, while in the non-working group (NWG), 96% had heard of it but 19.6% were unsure of what it meant and 3.6% had never heard of it. Similarly, 83% of WG students responded that they knew about UBC’s AI policy, in comparison to 50% of NWG. 93.5% of WG agreed or strongly agreed that “the importance of AI is clearly communicated to students,” versus 67.3% of NWG respondents; 32.7% of NWG students chose neutral or disagree in response to this statement, in comparison to 6.5% in the WG. Qualitative comments from student focus groups in November 2017 (WG = 3, NWG = 4) also indicated that WG students articulated a better understanding of AI as supporting the collective enterprise of the academic community versus the NWG’s focus on individual effort.

Thus the change was not only feasible but highly productive for students and also for faculty, who would face fewer instances of academic misconduct. In the end, we were able to roll out implementation to all sections a full year ahead of schedule. Although the flexible, rather than standardized, approach does mean less certainty about uptake by individual faculty and potentially some inconsistency in the scope of instruction, extended and explicit instruction in academic integrity, through an educative framework, is now a regular part of the curriculum in all three first-year programs.

Clarify Policy and Procedure

Part of the activity of building a culture of integrity in FYP was happening outside of the working group and curriculum design: it began with the very idea that academic integrity was a key and explicit value of our units, and that came with the expectation that faculty had an important role to play, and needed to participate in this shared enterprise. To do so, we needed a common understanding of policy and procedure, including when to report academic misconduct and how, since these practices were poorly articulated and inconsistently applied, and because we had a new organizational structure (the introduction of an FYP Chair in 2014). There was some initial reluctance and concern among faculty that heightened attention to AI was in fact a commitment to a disciplinary or “law and order” approach rather than an educative one; the instructors of the WRDS course in particular drew on the work of Rebecca Moore Howard and others to defend patchwriting as developmental (Howard, 1992, 1995). These exchanges helped push the conversation across FYP productively towards the theoretical framework the “Cheating Hearts” project had adopted, and identified an issue for us in conforming to Faculty of Arts’ policy and procedure for reporting academic misconduct: in our first-year courses, with students new to the expectations of research writing and university practices, when was a “case” reportable, rather than a “teachable moment”? A sub-committee, with representation from the three FYP units, led a year-long process to produce clear guidelines that reflected faculty input and consensus about these elements. This process was invaluable in supporting the sea change in our unit and laid the groundwork for the pedagogical changes being developed.

Establish AI Frameworks in Our Courses—Syllabus Language

In the pilot year, I identified our articulation of course policies in the syllabus as low-hanging fruit, ripe for signalling our new approach to AI. Adapting a practice James Orr articulates (2017), I created a course policy statement on AI that used the aspirational language of integrity—i.e., not “cheating,” “misconduct,” or “plagiarism”—and connected this concept to academic purpose and community, extending the finding by Löfström et al. (2015) that “integration into the academic community serves to prevent research misconduct” (p. 435).1 Further, the statement takes an explicit and educative approach by clearly outlining examples of violations of AI, accurately noting the consequences for such violations, and linking to resources and materials students can consult to know more, including university policy documents and library guides, so that they know where to find the support and information they need to meet this community standard.

After widespread use by instructors in the project working group, this statement has been adopted at the unit level for First-Year Programs, and so appears on most syllabi in these units. Several instructors in the working group also embedded integrity in the syllabus by including a learning outcome and an evaluation criterion on ethical research practices. The explicit outlining of expectations reflects our premise that we do not assume that “everyone” knows about these expectations or how to meet them.

Integrate AI Explicitly and Early in Course Content: The Definition Activity

We knew from experience that simply including the statement—no matter how intentionally designed—would not be sufficient for its uptake, even if we spent time in the first days of class discussing that statement, the practice reported by the majority of faculty in our non-working group (2017), and a recommendation frequently made in the literature on misconduct (e.g., Colella-Sandercock & Alahmadi, 2015; Wang, 2008). In the pilot year of the project, I designed a definition exercise2 to foster this engagement and give students a clear understanding of what we mean, clarity that too often both policy and instructors fail to provide (e.g., Brooks et al., 2011; Jurdi et al., 2012). In this no- or low-stakes activity, students are assigned readings related to academic integrity (including materials from the syllabus statement, such as the UBC Calendar and library guides, institutional policy for researchers on ethical practice, and a popular article on some current instance of misconduct, such as Melania Trump’s alleged plagiarism in 2016 of a speech by Michelle Obama). In class, they work with peers in small groups to produce a definition of AI based on these readings that must articulate not only what it means, but why it matters. After the class reviews these different definitions to identify the one or ones they find most accurate, we craft a composite definition of the concept that is posted on our course LMS page and referred to in the expectations for each of our formal assessments. We then revisit the definition at two points, mid-semester and just before the final research assignment, to reflect on how our ideas about integrity have changed, and to add any new insights or practices that students have subsequently come to understand. Through this collaborative process—one that requires personal investment, reflects the particular community, and creates a kind of group agreement or class integrity charter—students begin to take ownership of this concept and to establish it as a common value, an uptake I often see in my classes when students nudge each other about citation during peer review.

This activity is scheduled very early in the semester—often in the first sessions—and helps set up AI as the framework for the entire course. It requires students to review institutional policies and resources so that they know what they say and where to find them, and it allows them to confirm their understanding of these documents through working first with their peers and then as a class in conversation with the instructor. This opportunity to ask questions is essential: students see that working with AI takes effort—for all researchers. It is not something “everyone” already learned in high school, and it has complexities and nuances reflecting the array of research and professional practices in which it is applied and about which we can learn, together. Instructors can share their own experiences of difficulty in this area, from slip-ups while they were in university to issues in their own research (for example, I talk about my misreadings of “common knowledge” when I have published outside my field). In my own sections, we also typically produce our first “teachable moment,” because, in their definitions, no groups ever cite their sources, and when we point this out, we can have a light-hearted reflection on collective failure and reset our practices.

Embed AI Learning Throughout the Course

We learned that, for AI to “live” as a concept and practice beyond the first couple of weeks, our explicit and enhanced instruction about the expectation and how to meet it needs to be a consistent thread throughout the course, and, ideally, integrated into the scaffolding for each formal assessment. The understanding of and ability to apply AI principles are dynamic aptitudes that continue to develop alongside “core” content. Gaps in comprehension will emerge over the semester, and opportunities to ask questions not only address student frustrations but also help instructors recognize their own assumptions about what counts as common knowledge. For example, I finally realized my students were failing to properly document online journal articles not out of duplicity, but because they didn’t know where to find these sources in citation guides: the MLA category “scholarly articles in an online database” assumes that users already have a firm grounding in the language and infrastructure of research. Iterative instruction of AI also recognizes that different applications or situations will introduce complexities that students will need explicit help to navigate. This requirement will be particularly urgent in contexts that don’t look like traditional assignments (e.g., a formal paper assignment or exam), perhaps because students are still internalizing the value and appreciating its significance outside of “schoolroom” rules and also because they may not yet recognize novel assignments as additional forms that academic research can take. In my own courses, for instance, students typically stumble when they write their first blog post, forgetting to cite or link to sources, and not providing image credits, even though this assignment comes right after the definition activity. Since it is their first assessment for grades (upping the stakes) and is in a genre that most have not produced before, and that they associate with non-academic contexts, they don’t know how to meet the expectation, or that they should. Although I include an explicit evaluation criterion about ethical research practices on every assignment, this expectation clearly needs not only reinforcement but also opportunities for clarification.

With appreciation for this learning curve, instructors can incorporate opportunities for students to think together about what academic integrity will look like and require in each assignment, particularly those “untraditional” assessments: what might make it challenging to meet expectations in this particular application? What are solutions or strategies to address those challenges? For example, how do we cite sources in an oral presentation, or in genres such as websites or videos that typically don’t document research in the same ways a formal paper might? What about collaborative projects—work that, as Löfström et al. note, presents a “key academic integrity issue” about which instructors themselves may be “collectively confused” (2015, p. 9)? Similarly, instructors have noted issues—even before 2020’s pandemic-related “pivot” to digital teaching—with online assessments such as midterms or quizzes that are being done together when they are not supposed to be. Assessment design can support and embed AI—for instance, through project reflections (in which group members outline what each person contributed) or “open-book” and explicitly collaborative online tests—but for students to develop their own savviness about and toolkit for ethical practices, we also need to involve them as partners in explicit conversation and problem-solving. Applications that illustrate “grey” areas or complexities of AI can be particularly productive to puzzle through together as a way to deepen both student and instructor understanding. As an FYP faculty member noted in the Fall 2018 instructor survey (n = 18), “Students went in thinking they knew what academic misconduct was but found (because the examples were borderline, complicated, unexpected etc.) that this was something they actually needed to learn about.”

Reinforce the Relevance of AI Beyond the Classroom

In addition to this attention to the “how” of academic integrity, we have deepened the discussion of the “why” by inviting students to consider what a commitment to working with integrity does in particular disciplines and professions. What are the consequences for us, in this class and the field it represents, of not doing our work with integrity? What harms will be done? For example, we might ask them to consider (in a class discussion, small group work, or individual reflection) why it matters if a psychology scholar falsifies data in a research publication, or a sociologist fails to protect the identities of community partners who have shared sensitive information, or a medical student copies answers on an exam. In “Teaching Integrity,” John Dichtl (2003) similarly outlines the value of having students connect classroom and professional practice. He argues that instructors’ discussions of integrity expectations need to take place “inside and outside the classroom, and be expanded outward to include conversations about the work of professional historians,” including the American Historical Association’s “Statement on Standards of Professional Conduct” (p. 369). Similarly, an FYP instructor surveyed in 2018 suggested that students:

Read the Tri-Councils’ guidelines. Make a distinction for students between writing in most university classrooms where one must do one's own writing, and writing as a professional where there is often the availability of editors and others who can assist with revisions, etc., for both those whose first language is English and those for whom it is not. Much “real-world” writing involves boiler-plating, collaboration, copy-editing, etc.

 

This connection to professional standards and practices that Dichtl and the FYP instructor recommend is another way to focus on discipline-specific commitments to integrity (codes of conduct, ethics declarations), and it helps shift the emphasis from the consequences of cheating—a kind of schoolroom concern—to the consequences of error: to thinking about the implications and risks of unethical research, because the work we do as scholars contributes in real ways to how the world works. There is harm that can be done. By framing academic integrity in connection to students’ scholarly identities—as members of particular discourse and research communities they now identify with—we lay the foundation for them to see AI as personally and collectively relevant and consequential.

Recognize AI as “Hidden Curriculum”

This project has necessarily also involved a shift in faculty attitudes and an understanding of the potential for our AI instruction to more broadly cultivate belonging for more students. The work we do in making explicit our expectations of AI—and the steps by which one meets those expectations—has become part of a larger effort to challenge the “hidden curriculum” that reflects and reinforces inequities of access in higher education. Conversations about AI, or more typically about misconduct, illuminate the many other, related knowledges about higher education and its practices that too often we assume are shared. My work both as an administrator and on this project has helped me understand that too often, violations of AI are “canary in the coalmine” moments for students who are struggling, often because of systemic inequities that undermine their sense of belonging and their understanding of “how to university.” Many of the students I have interviewed for alleged academic misconduct ended up in disciplinary meetings because they did not understand how the university works: they didn’t know they could ask for an extension, for example, or take a late penalty. Others were in significant personal crisis and did not know about campus resources, or perhaps—more troublingly—did not feel that they mattered enough to the institution to take advantage of such resources.

From the pilot year of “Cheating Hearts” on, we have extended our educative framework to connect explicit AI instruction to explicit discussions about reasons why students may struggle to meet these expectations, and the options and resources available to them to ease such struggles. As the project has continued, however, I have argued—within FYP and beyond—that we have a duty to be much more explicit about what we are asking students to do, and why, to normalize asking questions, and to check our assumptions about what we expect that “everyone already knows.” Creating a framework of integrity seems to have had the additional benefit in our courses of encouraging students to talk to their instructors, giving us an opportunity to connect with and support them. A member of the 2017 working group (n = 7) noted a shift in the number and kind of these interactions: “if they are struggling with citation and issues of academic integrity, they tend to put it on the table, which is something that I've never seen before…They're extremely open about their struggles in general … I find it really refreshing.”

“Cheating” Lessons: Overall Take-Aways

Although a final round of assessments planned for the “Cheating Hearts” project’s scheduled conclusion in April 2020 was delayed due to the pandemic, we have met our major goals, so that, since 2018, all FYP courses and sections include at least some explicit instruction on academic integrity and teach students how to meet these expectations. Our 2017 pre-project surveys of working group (n = 7) and non-working group faculty (n = 10) document this change: working group faculty reported that “I didn't do anything with academic integrity in previous years” and “This was the first time we discussed it openly as a seminar,” while the majority of non-working group instructors reported only discussing policy, early in the semester, and providing class time on avoiding plagiarism later in the course.

Significantly, the changes we made were transformative but actually quite small in scope and, as intended, built on our existing course content. The syllabus statement and variations on the definition activity were the most commonly used materials, along with additional readings, paraphrase activities, discussions of patchwriting, discussions of student pressures and why students plagiarize, and quizzes. In the 2018 (n = 18) and 2019 (n = 17) surveys, instructors reported dedicating 2–3 more classes to introducing AI than they had before. Although we made space for this content in the courses (and addressed other pedagogical imperatives) by eliminating final exams, faculty continue to report that time is an ongoing constraint. An initiative I led in 2020 to create an online “Introduction to academic integrity” module, embedded in the UBC orientations program Jump Start and available for instructors to use in any course, may give instructors a way to “flip” some of the preliminary grounding in this concept. (This module launched in August 2020, and has been used in undergraduate and graduate courses.)

One convincing point of data has emerged in the number and type of misconduct cases. While through 2018 the number of cases remained consistent with past years—unsurprisingly, given the greater scrutiny and expectation on faculty to report—in 2019, only five cases were reported to the Chair. Of those, three were deemed minor infractions (patchwriting), and two were sent to the Dean’s office as indicating academic dishonesty. In past years, the vast majority of cases reflected accidental misconduct, resulting from a genuinely poor understanding of expectations, or misconduct arising from students in crisis who made poor choices under exceptionally challenging circumstances—two groups that, ideally, would receive education and resources without having to come to the Chair’s office. Given how acutely stressful a misconduct meeting is (for faculty, but particularly for students), this change in the profile of reported cases is deeply gratifying.

Our “Hearts” Will Go on: Spreading the Conversation

This project has attracted intense interest from faculty and staff across UBC, with group members invited to create workshops and presentations on teaching with integrity in departments and units across campus and at other local institutions. These connections build out the conversation and reflect an increasing appetite to learn new ways to cultivate this foundational value and concept. As FYP’s project illustrates, “changing the conversation” takes significant effort that benefits from collaboration to share the load. In taking up AI through an educational framework, UBC will need to make an ongoing commitment to invest—literally and figuratively—in the infrastructure this work requires so that we “achieve institutionalization,” the fourth and final stage in Bertram Gallant and Drinan’s model of AI implementation (2008, p. 4). As this project wraps up, I have identified the following ways we need to keep changing the conversation about AI at UBC and beyond.

Incorporate AI Throughout the Degree

Our study was tied to first-year writing courses, representing a course and year level that too often is considered the default and only place where AI is taught. AI instruction is not the sole responsibility of “composition,” nor can it remain exclusively co-curricular, featured in orientations programming or library skills workshops. Although these additional learning contexts are crucial for reiteration and reinforcement, AI instruction needs to be a shared element of the entire curriculum: ideally, students would talk and learn about the expectations and practices of AI in every course they take, including senior-level classes designed for majors and in graduate work, since these students also come to our courses with gaps in their knowledge—from differences in culture, discipline, and/or training—and presumably the shame of “not knowing” will be even more keenly felt by those in advanced courses. An institution-wide and coherent program of AI instruction, scaffolded to address increasing complexity and particular nuances and supported by level-appropriate resources (e.g., library and learning centre), would more effectively foster a culture of integrity and allow all students the access they need to meet these expectations.

Clarity and Consistent Application of Policy

This “integrity across the curriculum” approach should be buttressed by clear, student-centered policy that is consistently applied. As Sarah Elaine Eaton (2017) notes in her study of Canadian university policies on plagiarism, including UBC’s, too often these documents speak about violations and misconduct in quite generic ways that, she argues, do little to support consistent understanding and uptake of AI practices by both faculty and students (pp. 278–279). Studies of student perceptions of AI point to the crucial need for consistent uptake and application of institutional policy in cultivating a culture of integrity: faculty must reflect a common understanding of and commitment to upholding the expectation that students do their work with integrity (Löfström et al., 2015; Evans-Tokaryk, 2014; Jurdi et al., 2011, 2012; Wang, 2008; Christensen Hughes & McCabe, 2006). Language is also an important consideration: Mulholland’s (2020) analysis of Mount Royal University’s plagiarism policy critiques the dominant “moralistic and ethical” discourses of academic dishonesty in which students are “categorized…as honorable or shameful” (p. 105) and that obfuscate the responsibilities of the institution to educate students (p. 113). We have opportunities at UBC to rewrite our policies so that they speak clearly to students as well as faculty and staff, and—like the syllabus statement modelled by the “Cheating Hearts” group—do so in educative and proactive ways that all parties will recognize and take up. Ideally, these policies would be located outside as well as inside the academic calendar, so that they are more easily accessible, and the process for reporting misconduct would be equally clear and accessible, at the level of the institution, faculties, and departments.

The COVID-19 pandemic has, perhaps ironically, created several spaces for this advocacy to be effective at UBC and, arguably, beyond, since remote learning has made urgent the need for conversations about AI and assessment that have involved many more faculty than pre-pandemic initiatives would have. Discussions of “remote proctoring” platforms have, similarly, fuelled broader engagement with questions of ethics and equity, as instructors who wish to avoid such platforms have to rethink classroom practices. Faculty frustration with forms of academic misconduct such as students’ sharing of exam questions and course materials with peers and with “homework help” sites, apparent collaboration in online tests, and suspicions of contract cheating has bolstered calls for the institution to take a more explicit position on these issues—rather than leaving decisions up to individual departments or faculty members, which can then be seen as arbitrary and create perceptions of inequities—and provide support for staff and faculty to make the required pedagogical changes and to address issues such as copyright violations. Perhaps these challenges will result in a collective “teachable moment” about AI—and a change of heart.

Notes

  1. This statement, and other teaching and learning materials developed by the “Cheating Hearts” project, can be accessed at https://learningcommons.ubc.ca/faculty-resources/academic-integrity/.

  2. For full assignment instructions, see https://learningcommons.ubc.ca/faculty-resources/academic-integrity/.