Abstract
Even when relatively infrequent, careless and random responding (C/RR) can have robust effects on individual and group data and thereby distort clinical evaluations and research outcomes. Given such potential adverse impacts and the broad use of self-report measures when appraising addictions and addictive behavior, the detection of C/RR can reduce error substantially. Based on earlier research using a video game questionnaire as an exemplar, we cross-validated promising items for detecting C/RR and developed an expanded set of items, in this case using an Internet questionnaire to examine efficacy and generalization. Research participants were instructed to complete the questionnaire in standard fashion (i.e., cooperatively) or to adopt either a careless or random response set. Careless and random responders often obtained elevated mean questionnaire scores. Most items for detecting careless or random responding demonstrated significant differences across groups, and combinations of items showed high levels of accuracy, particularly in detecting random responders. Results suggest that a relatively small number of items, which might only add a minute or two to questionnaire completion time, can detect the great majority of random responders and most careless responders. Guidelines are provided for the development and application of items.
Notes
While these projections are not exact given the vagaries of random response patterns, they are reasonable approximations of what may occur and illustrate why further research on this topic is needed.
For purposes of clarity, the example assumes that none of the careless or random responders are also true positive cases, which is unlikely to be literally true. However, unless the rate of true cases is exceedingly high among careless or random responders who obtain elevated scores, a marked distortion in outcome will still result, although perhaps not quite as extreme as we have projected. Our basic intent here is to illustrate potential problems caused by non-detection of C/RR rather than to arrive at precise figures.
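As a purely illustrative sketch using hypothetical figures (the values of \(p\), \(r\), and \(q\) below are assumptions for illustration, not results from our data), the apparent positive rate when C/RR goes undetected can be approximated as a mixture of genuine cases and careless or random responders whose scores exceed the cutoff:
\[
\hat{p} \approx (1 - r)\,p + r\,q,
\]
where \(p\) is the true rate of positive cases among cooperative responders, \(r\) is the proportion of the sample responding carelessly or randomly, and \(q\) is the proportion of those responders who exceed the cutoff. For instance, with \(p = .05\), \(r = .10\), and \(q = .40\), the apparent rate becomes \((.90)(.05) + (.10)(.40) = .085\), roughly 70% higher than the true rate.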
The more general or generic types of items we designed are not specific to the type of questionnaire being used and have the potential for broad application, although the disconnection with specific questionnaire content raises certain concerns. Most of the other questions involved video game use because that is among the researchers’ interests. However, our study design allowed us to examine the impact of C/RR on an Internet use questionnaire and the effectiveness of more specific items in the same or a related domain for detecting C/RR. Understandably, researchers may want to modify item content to match their specific area of interest more closely.
Interested readers can contact the first author for further details about our statistical analyses and outcomes.
Cite this article
Meyer, J.F., Faust, K.A., Faust, D. et al. Careless and Random Responding on Clinical and Research Measures in the Addictions: A Concerning Problem and Investigation of their Detection. Int J Ment Health Addiction 11, 292–306 (2013). https://doi.org/10.1007/s11469-012-9410-5