Comparing social desirability responding on World Wide Web and paper-administered surveys

  • Dawson R. Hancock
  • Claudia P. Flowers

Abstract

Social desirability responding (SDR) on surveys administered on the World Wide Web (WWW) and on paper was examined using 178 graduate and undergraduate students randomly assigned to conditions in a 2 (World Wide Web vs. paper) × 2 (anonymous vs. nonanonymous) true experimental design. The findings reveal no differences in SDR between the WWW and paper-administered survey conditions, and no differences in SDR between the anonymous and nonanonymous conditions. These findings and potential explanations are examined for consideration by anyone interested in using the WWW to obtain accurate information from survey participants.
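
To make the design concrete, the sketch below simulates the kind of 2 × 2 factorial analysis the abstract implies. The abstract does not name the statistical test, so the two-way ANOVA, the variable names, and the simulated SDR scores here are illustrative assumptions, not the authors' actual method.

```python
# Hypothetical sketch of a 2 (mode) x 2 (anonymity) between-subjects
# analysis. All data are simulated; the ANOVA is an assumed analysis,
# since the abstract does not specify the test used.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
n = 178  # sample size reported in the abstract

# Randomly assign each participant to one level of each factor,
# mirroring the random assignment described in the abstract.
df = pd.DataFrame({
    "mode": rng.choice(["web", "paper"], size=n),
    "anonymity": rng.choice(["anonymous", "nonanonymous"], size=n),
    "sdr": rng.normal(loc=50, scale=10, size=n),  # placeholder SDR scores
})

# Two-way ANOVA: main effects of survey mode and anonymity, plus
# their interaction. Nonsignificant effects would correspond to the
# "no differences" pattern the abstract reports.
model = ols("sdr ~ C(mode) * C(anonymity)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```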

Keywords

Social Desirability; Intrinsic Religiosity; Class Attendance; Extreme Response Style; Scholastic Aptitude Test Score


Copyright information

© Association for Educational Communications and Technology 2001

Authors and Affiliations

  • Dawson R. Hancock (1)
  • Claudia P. Flowers (1)

  1. Department of Educational Administration, Research, and Technology, The University of North Carolina at Charlotte, Charlotte, USA
