Student Perceptions of Faculty Use of Cheating Deterrents
Evidence is provided on faculty use of cheating deterrents for in-class exams. The evidence comes from a survey of students who report on their most recent in-class exam in a randomly selected course that they are taking. Three types of cheating are considered: (i) advance knowledge of exam questions; (ii) copying; and (iii) other improper student actions during the exam. The deterrents examined consist of the following: (i) the rate at which questions are repeated; (ii) multiple versions of the exam and seating arrangements; and (iii) monitoring. The sample size is small but may cover about one-fourth of the faculty at the institution at which the survey was conducted.
Keywords: Cheating; Deterrents; Faculty
Evidence of student cheating has been mounting for decades. For example, Premeaux (2005) finds in a survey that students believe that about 30 % of students “cheat on a typical exam” and about 45 % “cheat on a typical written assignment.”
Although cheating is prevalent, catching cheaters is not. Diekhoff et al. (1996) find in a survey that only 2.5 % of students report ever getting “caught cheating during their tenure as college students.” One reason for this low rate is that faculty choose to avoid the problem of catching cheaters. Avoidance can be achieved by steering clear of situations in which cheating is likely to be observed, or, if cheating is observed, faculty can simply ignore it. Tabachnick et al. (1991) find in a survey that 21 % of faculty report “ignoring strong evidence of cheating.” Coren (2011) finds in a survey that 40 % of faculty report that they ignored “student cheating on one or more occasions.”
Why do faculty ignore cheating? Keith-Spiegel et al. (1998) find in a survey that faculty report that their reasons for ignoring cheating include insufficient evidence, stress, effort, fear, and denial. In a survey, Staats et al. (2009) find similar results when asking students why faculty ignore cheating. One form of denial is to maintain that something occurs at a rate that is not consistent with the evidence. Volpe et al. (2008) find in a survey that faculty underestimate the incidence of cheating. Brown et al. (2010) find similar results in a survey of business school deans.
Comparing the evidence on the incidence of cheating to the evidence on catching cheaters indicates that faculty are unlikely to confront cheaters. What else could faculty do about cheating? One approach that does not involve faculty confronting students about cheating is for faculty to deter student cheating.
Only three studies were found that contained evidence on the use of cheating deterrents. Wright and Kelly (1974) find in a survey that 64 % of students “thought that faculty supervision of examinations was conducive to cheating.” In surveys of students and faculty, Barnett and Dalton (1981) find that 21 % of students and 48 % of faculty report that “proctors remain alert throughout the exam to cases of cheating.” They also find that 60 % of students and one-third of faculty report that “instructors give the same exam to more than one section of the same class.” Graham et al. (1994) find in a survey of faculty that “20 % reported that they do not watch students while they are taking tests.” The contribution of the present paper is that it adds to the evidence on faculty use of cheating deterrents for in-class exams. The evidence comes from a survey of students.
Faculty actions will be examined for three types of student cheating on in-class exams: advance knowledge of exam questions; copying; and other improper student actions during the exam.
Advance knowledge of exam questions can be obtained from a student who has just taken the exam in a different section of the course or from a bank of questions created by students over time. Lovett-Hooper et al. (2007) find in a survey that 85 % of students report “getting questions or answers from someone who has already taken a test.” Bernardi et al. (2008) find in an open-ended survey that students state that an “effective way to deter cheating” is to use “different tests each time.”
Copying is a common method of cheating. Diekhoff et al. (1996) find in a survey that 26 % of students report that they cheated by “copying from someone else’s exam.” Houston (1976, 1983, 1986) provides experimental evidence that spaced seating, multiple versions of the exam within a section, and assigned seating deter copying on multiple choice exams.
Improper student actions during the exam include using crib notes, using electronic devices, and talking. Monitoring has the potential to discourage these actions. Diekhoff et al. (1996) find in a survey that students report that they “are most deterred from cheating by fear of embarrassment should they be caught.” Kerkvliet and Sigmund (1999) find in a survey of students that “more proctors per student seemed effective in reducing cheating.”
Near the end of the semester, the survey was administered in two sections of a course required for all business majors and one upper level course required only for a particular major within business. The students (N = 39) were given two sheets of paper: the first contained a method for selecting the course asked about on the survey; the second contained the survey. On the first sheet, the students were instructed to list up to four 3-credit courses (excluding the present course) in which they had an exam this semester. Then, the students were instructed to apply the given method to randomly select one course from their list by using personal information known only to the student. Students did not turn in their list or identify the course selected. The purpose of this procedure was to have the course randomly selected and known only to the student. Because the course was anonymous, the instructor of the course was also anonymous. Finally, the survey was anonymous and voluntary (89 % participated). No reward was given for participation.
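The paper does not disclose the actual selection rule handed to students, so the sketch below is an assumption about how such a rule could work: a number known only to the student (for example, the last two digits of a phone number) indexes into the student's private list of courses. The course names and the modulo scheme are illustrative, not the survey's.

```python
# Hypothetical sketch of a self-randomizing selection rule of the kind the
# procedure describes; the actual rule given to students is not disclosed
# in the paper, so the modulo scheme below is an assumption.
def select_course(courses, personal_number):
    """Pick one course from the student's private list using a number
    known only to the student. Neither the list nor the choice is turned
    in, so the selected course and its instructor remain anonymous."""
    if not courses:
        raise ValueError("list at least one eligible course")
    return courses[personal_number % len(courses)]

# Example: a student privately lists three courses and uses the digits 57.
# 57 % 3 = 0, so the first listed course is selected.
print(select_course(["ACC 201", "FIN 310", "MKT 305"], 57))  # -> ACC 201
```

The choice is only roughly uniform (it is exactly uniform only if the personal digits are uniform over the residues); the point of the device is anonymity rather than perfect randomization.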
The survey question on grade components read: “For the course selected on the white page, list the percentages for the components of the course grade. The total should add up to 100 %.” The listed components were: in-class exams; in-class quizzes; take-home exams; take-home quizzes; term papers or research papers; projects; presentations; homework; class participation; other, describe _.
Table 1 Survey with tallies of student responses
For question 5, 32 of 38 respondents report multiple sections of their course. Since 23 of those 32 students answered question 6, 72 % (23 of 32) of students who reported multiple sections had at least some idea of whether another section had the same exam as their section.
Questions 6 and 2 pertain to repetition of questions. For question 6, 35 % of respondents report that the exam in their section was not the same as the exam in an earlier or later section. For question 2, 74 % of respondents report that not more than a small fraction of the exam consisted of questions used within the last 2 years. Combining questions 6 and 2, 19 % of those who respond to both questions report both that the exams differed across sections and that not more than a small fraction of the questions had been used within the last 2 years.
Questions 3, 7, and 8 pertain to copying. For question 3, 29 % of respondents report that there was more than one version of the exam in their section. For question 7, 16 % of respondents report that seats were assigned randomly. For question 8, 50 % of respondents report that spaced seating was used when possible. Combining questions 3, 7, and 8, 10 % of those who respond to all three questions report that there was more than one version of the exam in their section, seats were assigned randomly, and, when possible, spaced seating was used. Alternatively, 40 % of those who respond to all three questions report that there was only one version of the exam in their section, seats were not assigned randomly, and spaced seating was not used even when it was at least partially possible.
Questions 9, 10, and 11 pertain to monitoring. For question 9, 74 % of respondents report that the proctor did not leave the room. For question 10, 42 % of respondents report that during the time that the proctor was in the room, the proctor periodically walked around the room. For question 11, 24 % of respondents report that during the time that the proctor was in the room, the proctor spent all of his time monitoring the students. Combining questions 9, 10, and 11, 11 % of those who respond to all three questions report that the proctor did not leave the room, periodically walked around the room, and spent all of his time monitoring the students.
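The combined percentages in this and the two preceding paragraphs are conjunction rates: a respondent counts toward the joint figure only if he or she answered every question in the group and gave the deterrent-consistent answer to each. A minimal sketch of that computation, with illustrative tallies rather than the paper's raw data:

```python
# Compute the share of respondents who answered all questions in a group
# and gave the deterrent-consistent (True) answer to each. Respondents
# missing any answer in the group are excluded from the denominator.
def joint_rate(responses, keys):
    answered_all = [r for r in responses if all(k in r for k in keys)]
    hits = [r for r in answered_all if all(r[k] for k in keys)]
    return len(hits) / len(answered_all) if answered_all else 0.0

# Illustrative tallies for questions 9-11 (a missing key = no answer):
responses = [
    {"q9": True, "q10": True, "q11": True},   # close monitoring on all three
    {"q9": True, "q10": False, "q11": False},
    {"q9": False, "q10": True, "q11": False},
    {"q9": True, "q10": True},                # skipped q11: excluded
]
print(round(joint_rate(responses, ["q9", "q10", "q11"]), 2))  # -> 0.33
```

Excluding partial respondents from the denominator is why each combined figure is reported over "those who respond to all three questions" rather than over the full sample.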
Question 1 pertains to the format of the exam. All four students who selected “other” on this question described the component as “short answer” or its equivalent. For students who report that at least half of the exam consisted of essays or short answers, and who answered question 6, 20 % (2 of 10) report that the exam in their section was not the same as the exam in an earlier or later section.
From question 1, objective questions (multiple choice or matching) make up 50 % of the average exam and essay questions make up 30 % of the average exam. Also, 55 % of respondents to question 1 report an essay component.
Define a low repeat rate as follows: (i) different exams in different sections; and (ii) not more than a small fraction of questions used within the last 2 years. Faculty who produce their own questions likely find doing so quite time consuming. I speculate that achieving a low repeat rate over 5 years would require high effort and would be an effective deterrent under some conditions, whereas achieving it over 2 years would require low to moderate effort and would be a feeble deterrent under most conditions. Therefore, as defined here, a low repeat rate should be interpreted as evidence of at least low effort in deterring improper advance knowledge of exam questions. As shown earlier, 19 % of respondents report a low repeat rate. Note that because different sections of a course may have different instructors, reports of different exams in different sections do not imply that an instructor produced more than one exam. Therefore, the 19 % figure may overstate the fraction of instructors exerting effort in this regard.
Define strong hindering of copying as follows: (i) more than one version of the exam in a section; (ii) seats assigned randomly; and (iii) when possible, spaced seating. Strong hindering of copying requires only low effort. As shown earlier, 10 % of respondents report strong hindering of copying.
Define close monitoring as follows: (i) the proctor did not leave the room; (ii) periodically walked around the room; and (iii) spent all of his time monitoring. Close monitoring requires only low effort. As shown earlier, 11 % of respondents report close monitoring.
The evidence provided here indicates that some faculty exert at least low effort to deter cheating. Define an externally awarded benefit as a benefit awarded by someone other than the recipient. Salary and tenure are externally awarded benefits, but effort to deter cheating is not a factor for these awards. A plausible example in which effort to deter cheating leads to an externally awarded benefit is the case where student observation of faculty use of cheating deterrents enhances the classroom environment and thereby makes teaching easier. That is, students award the benefit to the instructor through their behavior.
One potential student behavior is effort. Eisenberger and Shank (1985) find that high effort training is negatively related to cheating. In addition, they argue that “a developmental history of reward for high effort in numerous tasks contributes to an individual’s general interest and satisfaction in performing tasks industriously.” Greenberg (1979) finds that high work-ethic individuals feel that “workers should be rewarded for performance” based on ability or effort, but that low work-ethic individuals “seem to feel that [this] is unfair.” Greenberg adds that low work-ethic individuals’ “apparent aversion toward recognition of individual achievement is consistent with their interest in being able to get something for nothing.” In principle, faculty use of cheating deterrents makes it more difficult to get something for nothing. So, does this lead to higher student effort, higher achievement, or a stronger work ethic? This is an open question.
Externally awarded benefits for faculty who exert effort to deter cheating appear to be small and scarce. Because costs are substantial but externally awarded benefits are not, faculty effort to deter cheating appears to be motivated primarily by internally awarded benefits. The notion that internally awarded benefits can motivate behavior is supported by Newstead et al. (1996); the evidence in their survey of students can be interpreted as indicating that a student’s own standards are an important reason for not cheating.
The evidence provided here also raises other questions. Today, students are routinely exposed to the rhetoric of ethics. But students also observe actions, e.g., faculty use, or lack of use, of cheating deterrents. This confluence suggests the following question: by experiencing both rhetoric and action, what do students learn about (i) ethical talk and (ii) ethical action?
Unknown Duplicate Reports
Students were instructed to select the course asked about on the survey. This selection thereby determined the instructor. If the course and instructor were known, then duplicate reports for the pair (course and instructor) could be eliminated. However, by design, the course and instructor were anonymous. This design was adopted to encourage honest responses on the survey and to avoid asking questions about individual instructors without their knowledge. Because duplicate reports are unknown, the survey does not provide unambiguous evidence of the frequency of faculty use of cheating deterrents.
Small Sample Size
The number of students taking the survey is only 39. A rough idea of the number of unique pairs (course and instructor) can be obtained by comparing the responses across students to two questions: the question about components of the course grade (stated in the “Procedure” section); and the first question in Table 1. Making this comparison indicates that there are no identical sets of responses to these two questions. Of course, responses may differ because of varying interpretations of the categories or simply because of mistakes. For example, if “essay” is substituted for “open-ended problem,” then two sets of responses are similar. Using similar reasoning, two other sets of responses are similar. I conjecture that the survey covers about one-fourth of the 130 faculty at the institution at which the survey was conducted. Given the small number of unique pairs (course and instructor), this paper is best viewed as a limited look at faculty use of cheating deterrents.
Questions Designed for a Specific Institution
At the institution surveyed, it is uncommon for multiple sections of a course to take the same exam (except for a final exam) at the same time. If this were common, then question 6 should be skipped by students for whom this occurs, and the question should be reworded to refer to “an earlier or later time.”
Questions 9, 10, and 11 refer to a proctor. At the institution surveyed, it is uncommon for the instructor not to be the proctor. If this were common, then a question about the identity of the proctor should be added.
For questions 2, 3, 4, and 6, it seems unlikely that many students are using personal knowledge (for example, seeing copies of the other exam) to provide their answers. Instead, for these questions, it seems likely that most students are basing their answers on information obtained by talking to other students.
I thank an anonymous reviewer and participants at the PEA Conference for helpful comments. The views in this manuscript are mine and are not necessarily the views of any organization with which I am affiliated.
- Barnett, D., & Dalton, J. (1981). Why college students cheat. Journal of College Student Personnel, 22(6), 545–551.
- Brown, B., Weible, R., & Olmosk, K. (2010). Business school deans on student academic dishonesty: a survey. College Student Journal, 44(2), 299–308.
- Graham, M., Monday, J., O’Brien, K., & Steffen, S. (1994). Cheating at small colleges: an examination of student and faculty attitudes and behaviors. Journal of College Student Development, 35, 255–260.
- Kerkvliet, J., & Sigmund, C. (1999). Can we control cheating in the classroom? The Journal of Economic Education, 30, 331–343.
- Volpe, R., Davidson, L., & Bell, M. (2008). Faculty attitudes and behaviors concerning student cheating. College Student Journal, 42(1), 164–175.