Abstract
Purpose
We sought to provide empirical insight into, and develop theory for, a new organizational phenomenon: remote proctoring of Internet-based tests. We examined whether this technology is effective at decreasing cheating and whether it has unintended effects on test-taker reactions, test performance, or selection procedures.
Design/methodology/approach
Participants (N = 582) were randomly assigned to either a webcam-proctored or an honor-code condition and completed two cognitive ability tests online, one searchable and one non-searchable. Complete data were collected from 295 participants. We indirectly assessed levels of cheating by examining the pattern of test-score differences across the two conditions, and we directly measured dropout rates, test performance, and participants' perceived tension and invasion of privacy.
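The logic of this indirect cheating check can be made concrete with a minimal sketch, using hypothetical score arrays rather than the study's data; the sample sizes, score distributions, and analytic details below are illustrative assumptions, not the authors' actual procedure:

```python
# Minimal sketch of the indirect cheating logic described above.
# All numbers are hypothetical; the study's analysis may have differed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical scores: rows are participants, columns are
# (searchable test, non-searchable test).
proctored = rng.normal(loc=[20, 20], scale=4, size=(150, 2))
honor_code = rng.normal(loc=[24, 20], scale=4, size=(145, 2))

# If test takers look up answers, the searchable-test score should be
# inflated relative to the non-searchable one, so the within-person
# difference (searchable minus non-searchable) serves as a cheating index.
diff_proctored = proctored[:, 0] - proctored[:, 1]
diff_honor = honor_code[:, 0] - honor_code[:, 1]

# A larger mean difference in the honor-code condition than in the
# webcam-proctored condition is consistent with cheating.
t, p = stats.ttest_ind(diff_honor, diff_proctored)
print(f"t = {t:.2f}, p = {p:.4f}")
```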
Findings
The use of remote proctoring was associated with more negative test-taker reactions and decreased cheating. Remote proctoring did not directly affect test performance or interact with individual differences to predict test performance or test-taker reactions.
Implications
Technological advances in selection should be accompanied by empirical evidence. Although remote proctoring may be effective at decreasing cheating, it may also have unintended effects on test-taker reactions. By outlining an initial classification of remote proctoring technology, we contribute to the theoretical understanding of technology-enhanced assessment, while providing timely insight into the practice of Internet-based testing.
Originality/value
We provide timely insight into the development and evaluation of remotely proctored tests. The current study uses a unique randomized experimental design to indirectly estimate levels of cheating across two conditions. Building on these results, we outline an integrative model for future research on remotely proctored tests.
Notes
HIT refers to a Mechanical Turk Human Intelligence Task. Further information on Mechanical Turk is provided in the following section.
Conducting three independent-samples t tests inflates the family-wise Type I error rate to 0.14. Applying a family-wise error correction procedure, such as a Bonferroni correction, renders this test non-significant. However, family-wise error corrections may come at the cost of increased Type II error. Given the assumptions of family-wise error correction procedures, the pattern of mean differences, the strength of the theoretical justification for this effect, and the estimated effect size, we chose not to apply this correction.
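The arithmetic behind this footnote is straightforward; a minimal sketch, assuming three tests each conducted at α = .05:

```python
# Worked numbers behind the footnote: the family-wise Type I error
# rate for three independent tests at alpha = .05, and the
# Bonferroni-adjusted per-test alpha that would control it.
alpha, k = 0.05, 3

familywise = 1 - (1 - alpha) ** k   # 1 - 0.95**3 = 0.142625, i.e. ~0.14
bonferroni_alpha = alpha / k        # 0.05 / 3, i.e. ~0.0167 per test

print(f"family-wise error rate: {familywise:.3f}")        # 0.143
print(f"Bonferroni per-test alpha: {bonferroni_alpha:.4f}")  # 0.0167
```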
Acknowledgments
The authors are grateful to Michael Acquah, Cecilia Ramirez, and The George Washington University's Workplaces and Virtual Environments (WAVE) lab for their assistance with study design, and to Sheldon Zedeck, Frederick Oswald, and two anonymous reviewers for their insightful comments and feedback on earlier versions of this manuscript.
Cite this article
Karim, M.N., Kaminsky, S.E. & Behrend, T.S. Cheating, Reactions, and Performance in Remotely Proctored Testing: An Exploratory Experimental Study. J Bus Psychol 29, 555–572 (2014). https://doi.org/10.1007/s10869-014-9343-z