Introduction

As elsewhere in the world, academic misconduct is a serious problem in Aotearoa New Zealand (NZ hereafter). Yet, compared to other countries such as Australia, Canada, and the United States, we know relatively little about the extent of the problem here or the factors associated with it. Indeed, fewer than a dozen empirical studies have investigated academic misconduct among students in NZ, and none of them have included students from more than one institution. Consequently, our educational leaders and practitioners are left under-informed as they seek to address the problem and to promote academic integrity. To help provide the knowledge and insights needed to craft good policy and best practice, the Research on Academic Integrity in New Zealand (RAINZ) Project—a research collaboration involving academic and professional staff from eight tertiary institutions—was founded in 2021. The present study represents the RAINZ Project’s first empirical investigation of academic misconduct and the first-ever multi-institutional investigation of its kind in NZ.

Theoretical Framework

Rooted in psychological theories of human functioning, the present study conceptualises academic misconduct (and behaviour more generally) as a function of personal and environmental factors (e.g., Bandura, 1986; Dewey, 1922; Lewin, 1936). As Dewey (1922) wrote long ago, moral conduct is not “mysteriously cooped up within personality…all conduct is an interaction between elements of human nature and the environment, natural and social” (p. 10). This theoretical principle of interactionism articulated by Dewey was made more widely known by Lewin’s (1936) behaviour equation—B = f (P, E), i.e., behaviour (B) is a function (f) of the person (P; their personality, motivation, and history) and their environment (E; their physical and social surroundings)—and Bandura’s (1986) reciprocal determinism (i.e., behaviour both affects and is affected by personal and environmental factors). As detailed below, the personal factors of interest in this study concern students’ moral attitudes towards academic misconduct, and the environmental factors include students’ perceptions of the academic integrity climate at their institution and their perceptions of peer norms related to academic misconduct.

Academic Misconduct in Aotearoa

Only a handful of empirical studies of students’ academic misconduct have been undertaken in NZ, each using different measures and none involving students from more than one institution (e.g., Adam et al., 2017; de Lambert et al., 2006; Henning et al., 2013; Stephens et al., 2021; Walker, 1998). Accordingly, it is difficult to make any firm claims about the prevalence of academic misconduct among undergraduates in NZ. That said, it appears to be high. Walker (1998) reported that staff believed plagiarism to be “commonplace” at their institution, with 86% reporting they had detected at least one case of plagiarism in their current role. Stephens et al. (2021) found that 77.4% and 75.9% (in 2012 and 2017, respectively) of undergraduates reported engagement in at least one of the eight forms of academic misconduct on their survey. Collusion (unpermitted collaboration) on an assignment was the most commonly reported behaviour in both cohorts (62.8% and 62.2%, respectively), and plagiarising a few sentences or paragraphs from the internet was the second most common (47.7% and 47.1%, respectively). Importantly, there have been no studies in NZ on the prevalence of contract cheating or other forms of third-party writing assistance, including students’ use of generative artificial intelligence. By accounting for these increasingly problematic behaviours and by including students from several institutions, the present investigation offers the most comprehensive study of academic misconduct undertaken in NZ.

Academic Integrity Climate

Students’ perceptions of the academic integrity climate reflect the extent to which the institutional culture is characterised and guided by policies and practices as well as norms and values that make academic integrity central and salient (e.g., McCabe et al., 2012). Fostering a culture of integrity is complex, requiring collegial discussion and student buy-in to create a holistic system (Bertram Gallant, 2011). A shared system—one that is clearly communicated, well-understood, and broadly supported—forms the foundation of that culture, allowing students to develop the knowledge, skills, and attitudes needed to “achieve with integrity” (Stephens, 2019). Decades of empirical research have found that students’ perceptions of institutional and staff support for academic integrity affect their engagement in academic misconduct (McCabe et al., 2012; O’Neill & Pfeiffer, 2012). For example, when students believed staff would report them and that the penalties were severe, they were less likely to engage in academic misconduct (O'Neill & Pfeiffer, 2012). Conversely, when students perceived that staff did not take cheating seriously (Whitley, 1998) or that consequences would be negligible (Beasley, 2014), they were more likely to cheat.

Peer Norms

Consistent with the social psychological and social cognitive theories discussed above, students’ perceptions of peer attitudes and behaviour significantly affect their own attitudes and behaviours. In his seminal study, Bowers (1964) found that students who perceived weak peer disapproval of cheating were three times more likely to cheat compared to those who perceived peer disapproval to be strong. Similarly, students’ perceptions of peer engagement in academic misconduct have been positively associated with their own engagement in such behaviour (Malesky et al., 2021; McCabe et al., 2012; Rettinger & Kramer, 2009). Moreover, perceived peer cheating has led students to underestimate the seriousness of academic misconduct, as they perceive it as a normative behaviour among their peers (Zhao et al., 2022). Peer norms can exert a strong influence on students' decision-making processes and contribute to the normalisation of cheating within academic settings, particularly so when students have permissive attitudes towards cheating (O'Rourke et al., 2010).

Moral Attitudes

Students’ moral or ethical attitudes about academic misconduct have been conceptualised and assessed in various ways over the past half century. Two types of attitudes or judgements related to cheating have received the most attention: moral valence (e.g., its “seriousness” or “unacceptability”) and moral disengagement (i.e., the extent to which one rationalises or negates personal responsibility for cheating). Research has consistently found students’ self-reported academic misconduct to be negatively associated with the former (e.g., Anderman et al., 1998; Murdock et al., 2004; Stephens, 2018) and positively associated with the latter (Farnese et al., 2011; Haines et al., 1986; Stephens, 2018). In a study that included measures of moral valence and moral disengagement, Stephens (2018) found that the relations between students’ beliefs about cheating (“It’s morally wrong”) and their engagement in it (“I did it”) were fully mediated by moral disengagement (“It’s not my fault!”).

The Present Investigation

The purpose of the present investigation was to help provide tertiary educational leaders and decision-makers in NZ the knowledge and insights needed to craft good policy and best practice related to academic integrity. In order to do so, the RAINZ Project launched the first-ever nationwide survey of academic misconduct among undergraduate students. The survey was designed to answer the following questions:

  1. How prevalent is academic misconduct among undergraduates in NZ? Specifically, what percent of students report engaging in various types of academic misconduct over the past 12 months? Which behaviours are most common?

  2. Where do undergraduates learn about academic integrity? Which sources do students designate as the most informative?

  3. To what extent are undergraduates’ perceptions and attitudes associated with engagement in academic misconduct?

As depicted in Fig. 1, a conceptual model and set of testable hypotheses were developed based on the foregoing theoretical framework and literature reviewed. Specifically, participants’ engagement in academic misconduct was hypothesised to be: negatively associated with perceptions of a culture of integrity (H1a) and positively with perceptions of a culture of cheating (H1b); negatively associated with perceptions of peer disapproval of cheating (H2a) and positively with perceptions of peer cheating behaviour (H2b); and negatively associated with their judgement of cheating as morally unacceptable (H3a) and positively with their tendency for moral disengagement (H3b).

Fig. 1

Conceptual model of hypotheses tested. Note. Dashed lines indicate negative associations between factors, and solid lines indicate positive associations between them

Method

To address the foregoing research questions, a cross-sectional quantitative study utilising self-report data from a nationwide student survey was employed.

Participants

Participants included undergraduate students from seven tertiary institutions in NZ: four on the North Island and three on the South Island. A total of 7,875 undergraduates started the survey. Among those who started the survey, 4,666 (59.3%) completed it (i.e., they responded to most items, including those related to academic misconduct). Among those who completed it, 173 (3.7%) were excluded for one of three reasons: insufficient response time (i.e., less than 300 s), dishonest responses (i.e., students who self-reported at the end of the survey that they were “Not at all honest” or “Not very honest” in their responses), and/or irregular response patterns (e.g., straight-lining/non-differentiation). As detailed in Table 1, the majority of the 4,493 participants in the final dataset indicated they were between 16 and 19 (33.4%) or 20 and 24 (49.7%) years old, female (64.5%), and Pākehā (67.2%). These percentages, however, varied considerably among the seven institutions. For example, although the average percentage of participants who indicated being 35 years of age or older was 5.7%, it was as low as 1.9% at one institution and as high as 40.8% at another. Similarly, although females comprised most of the sample at all institutions, the percentage ranged from 57.0 to 75.5%.
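To make the exclusion criteria concrete, the sketch below shows how such screening might be applied to the response data. It is a minimal illustration only, assuming a pandas DataFrame with hypothetical column names (duration_secs, honesty, and the substantive item columns); it is not the actual variable naming or screening syntax used in the study.

```python
# Illustrative sketch of the screening criteria described above (not the
# authors' actual code). Column names are hypothetical.
import pandas as pd

def screen_participants(df: pd.DataFrame, item_cols: list) -> pd.DataFrame:
    """Exclude responses with insufficient time, self-reported dishonesty,
    or straight-lining (identical answers across all substantive items)."""
    too_fast = df["duration_secs"] < 300  # less than 300 seconds
    dishonest = df["honesty"].isin(["Not at all honest", "Not very honest"])
    straight_lined = df[item_cols].nunique(axis=1) <= 1
    return df[~(too_fast | dishonest | straight_lined)].copy()
```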

Table 1 Demographic characteristics of participants

Procedures

The sampling and recruitment procedures varied by institution. Five of the seven institutions invited all undergraduate students to participate, with three of the five sending email invitations to students’ university accounts and the remaining two soliciting participation via various advertisements (e.g., survey links on Facebook as well as flyers and posters around campus). Two of the seven institutions invited stratified (based on gender, year of study, and ethnicity) random samples of undergraduate students to participate in the study via emails to their university accounts. Response rates varied by institution, ranging from 4.4% to 28.7%, with a mean of 15.6%. Regardless of the sampling or recruitment procedure employed, all participants completed the survey anonymously and online (via Qualtrics) and, at the end of the survey, were offered an opportunity to enter a prize draw for a $100 NZD Prezzy Card. This research was approved by The University of Auckland Human Participants Ethics Committee in 2022 (Reference Number UAHPEC23902) and, subsequently as needed (where reciprocal agreements were not in place), by other institutional ethics committees.

Measures

The survey used in this study was an adapted version of the McCabe-ICAI (International Center for Academic Integrity) Student Survey (e.g., Rettinger et al., under review). Specifically, we adapted the survey in four ways: (1) changing the spelling of words from American English to NZ English (e.g., “behavior” to “behaviour”); (2) replacing the word “professors” with “teachers” or “academic staff”; (3) removing several questions (e.g., parental educational attainment) and measures (e.g., achievement goal structures) to shorten the survey; and (4) adding four items to the measure of academic misconduct as well as an additional measure related to academic integrity learning. All measures are described below.

Academic Integrity Climate

Participants’ perceptions of the academic integrity climate at their institution were assessed with an 18-item measure designed to assess two latent constructs: culture of integrity (13 items; e.g., “The academic staff here clearly define what actions are considered to be cheating in their courses”) and culture of cheating (5 items; e.g., “Most students here ignore the academic integrity policy/regulations”). Participants indicated their agreement with the statements on a five-point Likert scale (1 = Strongly Disagree to 5 = Strongly Agree).

Peer Norms

Participants’ perceptions of peer norms at their institution were assessed with a 10-item measure designed to assess two latent constructs. Peer disapproval of academic misconduct (5 items; e.g., “If I cheated on a test or exam, my friends would be really disappointed in me”) was assessed using a five-point Likert scale (1 = Strongly Disagree to 5 = Strongly Agree), and peer cheating was assessed by asking participants to indicate how often, on a five-point scale (1 = Never to 5 = 11 or more times), they had “observed or had direct knowledge of students” engaging in academic misconduct (5 items; e.g., “Using unauthorized notes or sources during a test or exam”).

Moral Attitudes

Participants’ moral attitudes related to academic misconduct were assessed with a 12-item measure designed to assess two latent constructs. Moral unacceptability of academic misconduct was assessed by asking participants to use a five-point scale (1 = Not at all morally/ethically wrong to 5 = Completely morally/ethically wrong) to indicate the extent to which they thought a set of behaviours were morally/ethically wrong (5 items; e.g., “Getting questions or answers from someone who has already taken a test or exam”), and moral disengagement was assessed by asking participants to use a five-point Likert scale (1 = Strongly Disagree to 5 = Strongly Agree) to indicate the extent to which they agreed with statements that displaced or otherwise minimised personal responsibility for cheating (7 items; e.g., “It is OK to cheat to help one’s friends”).

Academic Misconduct

Participants’ engagement in academic misconduct was assessed with a 27-item measure designed to assess four types of misconduct: collusion (13 items); misuse of resources (8 items); fraud (3 items); and contract cheating (5 items). For each item, participants were asked to indicate how often (1 = Never; 2 = Once; 3 = 2–4 times; 4 = 5–10 times; 5 = 11 or more times; or “not applicable to my program”) during the last 12 months they engaged in the behaviours described. The 27 items included the 23 items used by Rettinger et al. (under review) as well as four original items: two related to collusion during tests or exams (“Working with other students or unauthorised individuals to complete an individual exam or test (including open book)” and “Communicating with other students during a test or exam when it is prohibited”) and two related to the use of artificial intelligence (“Using paraphrasing tools on someone else's writing and submitting it as your own” and “Using an artificial intelligence such as an online text generator to do your academic work and submitting it as your own”).

Academic Integrity Learning

An adapted version of a measure developed by McCabe et al. (2012) was used to assess participants’ learning about institutional policies related to academic integrity. Specifically, participants used a five-point scale (1 = Nothing to 5 = A lot) to report how much they had learned from various sources (12 items; e.g., “Orientation programme” and “Teacher discussions”).

Analysis

Data were first screened for missing data, invalid responses, and other anomalies. Due to the highly skewed distribution of responses on the measure of academic misconduct, the items were dichotomised (where 0 = No, did not do it and 1 = Yes, did it at least once). Confirmatory factor analysis was then employed to confirm the structure and fit of all measurement models (results from these analyses are presented in the Online Resource). After establishing acceptable model fit, Cronbach’s alphas were calculated to assess the internal consistency of all factors. Finally, frequency statistics and Pearson correlation coefficients were employed to address the research questions and test the hypotheses. All analyses were conducted using version 25 of SPSS and its AMOS programme.
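Although the analyses were run in SPSS and AMOS, the dichotomisation, internal-consistency, and correlational steps described above can be approximated as in the hedged sketch below. Column names (am_1 to am_27 for the misconduct items, culture_integrity for a composite climate score) are hypothetical, and the confirmatory factor analyses are omitted.

```python
# Minimal sketch of the dichotomisation, Cronbach's alpha, and Pearson
# correlation steps described above; column names are hypothetical, and the
# CFA (run in AMOS) is not reproduced here.
import pandas as pd

def dichotomise(freq: pd.Series) -> pd.Series:
    """Recode 1 = Never as 0 and any engagement (2-5) as 1; missing stays missing."""
    return freq.where(freq.isna(), (freq > 1).astype(int))

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Example usage, assuming `df` holds the screened responses:
# am_items = [f"am_{i}" for i in range(1, 28)]
# df[am_items] = df[am_items].apply(dichotomise)
# df["am_sum"] = df[am_items].sum(axis=1)              # mean of this column -> mean sum score
# print(cronbach_alpha(df[am_items].dropna()))         # internal consistency
# print(df["am_sum"].corr(df["culture_integrity"]))    # Pearson r (e.g., H1a)
```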

Results

Descriptive statistics for all latent variables are presented first, followed by results related to the three research questions.

Descriptive Statistics for All Latent Variables

Full sample descriptive statistics and the observed range among the seven institutions for all latent factors are detailed in Table 2. Three details merit comment. First, although the Cronbach’s alphas for misuse of resources and fraud/contract cheating were low (0.66 and 0.67, respectively), all others were acceptable or good. Second, perceptions of peer cheating as well as academic misconduct and its three sub-factors were positively skewed (skewness > 1.00). As described above, the latter variables were dichotomised, and the mean values reported below are mean sum scores (e.g., for academic misconduct, participants, on average, reported engaging in 2.65 of the 27 behaviours assessed). Finally, mean scores (for both the full sample and the observed range among the seven institutions) for all variables hypothesised to be negatively associated with academic misconduct were above the scale midpoint (e.g., participants, on average, agreed with items on the culture of integrity measure) and below the scale midpoint for all variables hypothesised to be positively associated with academic misconduct (e.g., participants, on average, disagreed with items on the culture of cheating measure). Nonetheless, there were some notable between-institution differences, particularly with respect to academic misconduct (observed range = 0.99–3.18).

Table 2 Descriptive statistics for all latent variables: full sample means and standard deviations and observed range of mean scores among the seven institutions

Prevalence of Academic Misconduct

As detailed in Table 3, the most frequently reported behaviour was a form of collusion: “Working together on an assignment with other students…” (C_5). On average, 30.3% of participants reported engaging in that behaviour at least once during the previous 12 months (observed range = 12.4% to 35.1%). The second to fourth most common forms of academic misconduct involved misuse of resources: unauthorised downloading or use of teacher’s materials (M_7) as well as two forms of plagiarism (M_1 and M_2). Given that this survey took place during the two months prior to the release of ChatGPT, it is also notable that 14.8% of participants reported “using an artificial intelligence” to do their academic work during the past year (Misuse_11n). Finally, most students (approximately two-thirds) reported engaging in at least one form of academic misconduct (observed range = 39.3–73.1%).

Table 3 Percent of Participants Reporting Engagement in Academic Misconduct: Full Sample Mean and Observed Range Among the Seven Institutions

Learning About Academic Integrity

As detailed in Table 4, the sources of participants’ learning about academic integrity varied considerably. “Course outlines or syllabus” was the most common source of learning about academic integrity policies, with 57.9% indicating Very much or A lot (range = 54.7–63.9%), and “Non-institutional social media sources” were the least popular source, with only 6.1% of students indicating that they learned Very much or A lot from them (range = 0.0–15.3%). In terms of institutional sources, “Student organisation/association/advocacy group” was the least popular, with only 9.7% of students indicating Very much or A lot (range = 4.7–24.0%).

Table 4 How much students learn about academic integrity policies from various sources: sample means and observed range among the seven institutions

Perceptions, Attitudes, and Academic Misconduct

As hypothesised, participants’ engagement in academic misconduct was negatively associated with their perceptions of a culture of integrity (r = −.19) and positively with their perceptions of a culture of cheating (r = .26); negatively with perceptions of peer disapproval of cheating (r = −.34) and positively with perceptions of peer cheating behaviour (r = .49); and negatively with their judgement of cheating as morally unacceptable (r = −.33) and positively with their tendency for moral disengagement (r = .42). As also detailed in Table 5, the strength of the hypothesised associations varied among the seven institutions. For example, the correlations between academic misconduct and culture of integrity were the weakest, ranging from −.14 (a small effect) to −.31 (a medium effect). By contrast, the correlations between academic misconduct and peer cheating were the strongest, ranging from .42 (a medium effect) to .55 (a large effect).

Table 5 Tests of hypotheses: associations between participants' perceptions, attitudes, and academic misconduct

Discussion

Summary of Results

The RAINZ Project was formed with the purpose of promoting academic integrity, and the present investigation was designed to garner the knowledge and insights needed to help tertiary educational leaders and decision-makers craft good policy and best practice related to academic integrity. This study—the first-ever nationwide survey of academic misconduct among undergraduate students in NZ—found that the majority of participants (64.8%) reported engaging in at least one form of academic misconduct (2.65 behaviours, on average) in the previous year, with unpermitted collaboration (i.e., “Working together on an assignment with other students when the teacher asked for individual work”), a form of collusion, being the most common breach (30.3%). Although the (unpermitted) use of generative artificial intelligence was comparatively low (14.8%), this survey was completed in the months prior to the release of ChatGPT, and such use has likely increased substantially over the past year. This study also found that “course outlines or syllabus” and “teacher discussions” were designated as the most informative sources with respect to participants’ learning about academic integrity, and “Student organisation/association/advocacy group” was among the least informative. Finally, the results offered support for all the hypotheses:

  - As participants’ perceptions of an institutional culture of integrity and peer disapproval of cheating, as well as their own belief in the moral unacceptability of cheating, increased, their reported engagement in academic misconduct decreased; and

  - As participants’ perceptions of an institutional culture of cheating and peer cheating, as well as their tendency for moral disengagement, increased, so too did their engagement in academic misconduct.

Implications for Policy and Practice

The findings of this study suggest several implications for educational policy and practice. First, the prevalence of academic misconduct among undergraduates warrants discussion and action at the highest levels of leadership—within, between, and beyond individual institutions. The latter might include the Academic Quality Agency for New Zealand Universities broadening its oversight and requirements. Precedent (and perhaps a model) for such action can be found in the compliance mandates created by Australia’s Tertiary Education Quality and Standards Agency. For example:

Section 5.2 requires providers to implement procedures and policies that uphold academic integrity and address misconduct or allegations of misconduct. Preventative action must be taken to mitigate foreseeable risks or to prevent the recurrence of identified breaches. Guidance on integrity must be provided to students, and providers are responsible for ensuring that third-party delivery of teaching does not compromise academic integrity. (https://www.teqsa.gov.au/guides-resources/compliance-focus/compliance-focus-academic-integrity#our-role)

Second, even in the absence of compliance mandates, the findings suggest that institutions (their leaders and decision-makers) should undertake efforts to create and sustain cultures of academic integrity. This is neither a novel suggestion nor an easy one, but decades of research have shown such cultures are possible and have been associated with lower levels of academic misconduct (e.g., Malesky et al., 2021; McCabe et al., 2012; O'Neill & Pfeiffer, 2012; Vandehey et al., 2007). Numerous models and resources are available for promoting academic integrity. The International Center for Academic Integrity (2021), for example, offers a set of “fundamental values” as well as other resources for getting started. Others, such as Bertram Gallant (2011) and Stephens (2016, 2019), have proposed systems-based, multi-level approaches for developing cultures of integrity.

Third, given the moderate to strong associations between peer norms and academic misconduct found in this study and others (for a meta-analysis of the "perceived peer cheating effect", see Zhao et al., 2022), policy and practice should seek to “transform the norm” (Stephens, 2019). Previous research has shown that social influence campaigns involving the use of peer models—typically those deemed credible and/or popular—might offer the best way to change attitudes and behaviours (Haines, 1996). Findings from the present study indicate that student organisations, associations, and advocacy groups are currently among the least informative sources with respect to student learning about academic integrity. This is a shame but also an opportunity. Much more can and should be done to involve students in creating a culture of integrity.

Finally, as found in the present study and others before it (Anderman et al., 1998; Stephens, 2018), students’ moral attitudes—their beliefs about the acceptability of academic misconduct and their personal responsibility for refraining from engagement in it—matter. Accordingly, efforts to educate students about the meaning and importance of academic integrity as well as their responsibilities for exemplifying the values and actions associated with it need to take place early and often. As suggested by Stephens et al. (2021), one-off academic integrity courses to be completed upon matriculation might be a good start but are insufficient on their own. Pre-commitment strategies and follow-up reminders are also needed. For example, experimental research has shown that students were significantly less likely to cheat and to justify dishonesty through moral disengagement after signing or even just reading an academic integrity statement (Shu et al., 2011).

Limitations and Future Directions for Research

The present study used a cross-sectional research design, in which data were collected at a single point in time via an anonymous self-report survey. Although convenient and cost-effective, this design and method of data gathering limit the ability to make firm causal claims. Future studies should employ longitudinal or experimental designs to assess the causal effects of students’ perceptions and attitudes on academic misconduct. Another important limitation of the present study concerns its reliance on self-report measures. Future research should seek to include additional types of measures, including interviews, document analysis, and observations. Towards these ends, the RAINZ Project plans to conduct additional studies in the years ahead that will both replicate and extend the research design and measures used in the present investigation. Accordingly, the findings reported here will serve as a baseline against which future results can be compared. Finally, although the 40% non-completion rate observed in this study is not unusual for a multi-page online survey (Spennemann, 2022), future research should take steps to reduce potential attrition bias.

Conclusion

Although seemingly intractable, the widespread problem of academic misconduct is not inevitable. Academic integrity can be cultivated. Indeed, it must be cultivated. At stake is nothing less than the value, validity, and credibility of not only our exams and all other assessments but also the degrees and credentials to which they lead. As the emergence and increasing accessibility of generative artificial intelligence presents new opportunities for instruction and assessment, it also brings new challenges to academic integrity. The time to act is now. We hope the findings of this study will provide the impetus and roadmap needed for moving forward.