
Mixed-Mode Surveys and Data Quality

Meta-Analytic Evidence and Avenues for Future Research
Michael Bosnjak

Chapter. Part of the Schriftenreihe der ASI - Arbeitsgemeinschaft Sozialwissenschaftlicher Institute book series (SASI).

Abstract

It has often been noted that survey methodology is inherently pragmatically oriented (e.g., Bosnjak and Danner 2015, p. 309; Goyder 1987, p. 11): How to sample and recruit respondents to reduce coverage and sampling errors, how to operationalize concepts to reduce measurement error, and how to minimize the differences between those who responded and those who did not on all variables of interest (i.e., nonresponse bias) are the generic guiding questions in survey methodology (Dillman et al. 2014; Groves et al. 2011).
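To make the nonresponse bias component concrete, it helps to recall the deterministic decomposition underlying the meta-analysis by Groves and Peytcheva (2008), cited below; the notation (respondent count $r$, nonrespondent count $m$, full sample $n = r + m$, and corresponding means $\bar{y}_r$, $\bar{y}_m$, $\bar{y}_n$ of a survey variable $y$) is chosen here for illustration and is not taken from the chapter itself:

\[
\bar{y}_r - \bar{y}_n \;=\; \frac{m}{n}\,\bigl(\bar{y}_r - \bar{y}_m\bigr),
\]

so nonresponse bias is the product of the nonresponse rate and the respondent-nonrespondent gap. For example, a 60% response rate ($m/n = 0.4$) combined with a five-point gap biases the respondent mean by $0.4 \times 5 = 2$ points; a survey mode can therefore reduce bias either by raising the response rate or by narrowing that gap.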


References

  1. Biemer, P. P. (2010). Total survey error: Design, implementation, and evaluation. Public Opinion Quarterly, 74(5), 817-848.
  2. Blom, A. G., Bosnjak, M., Cornilleau, A., Cousteaux, A.-S., Das, M., Douhou, S., & Krieger, U. (2016). A comparison of four probability-based online and mixed-mode panels in Europe. Social Science Computer Review, 34(1), 8-25.
  3. Bosnjak, M., & Danner, D. (2015). Survey participation and response. Psihologija, 48(4), 307-310.
  4. Bosnjak, M., Das, M., & Lynn, P. (2016). Methods for probability-based online and mixed-mode panels: Recent trends and future perspectives. Social Science Computer Review, 34(1), 3-7.
  5. Cook, C., Heath, F., & Thompson, R. L. (2000). A meta-analysis of response rates in web- or Internet-based surveys. Educational and Psychological Measurement, 60(6), 821-836.
  6. De Leeuw, E. D. (1992). Data quality in mail, telephone and face to face surveys. Amsterdam: TT Publikaties.
  7. De Leeuw, E. D. (2005). To mix or not to mix data collection modes in surveys. Journal of Official Statistics, 21(2), 233-255.
  8. De Leeuw, E. D., & van der Zouwen, J. (1988). Data quality in face to face interviews: A comparative meta-analysis. In Telephone survey methodology. New York: Russell Sage Foundation.
  9. De Leeuw, E., Callegaro, M., Hox, J., Korendijk, E., & Lensvelt-Mulders, G. (2007). The influence of advance letters on response in telephone surveys: A meta-analysis. Public Opinion Quarterly, 71(3), 413-443.
  10. Dillman, D. A., & Messer, B. L. (2010). Mixed-mode surveys. In J. D. Wright & P. V. Marsden (Eds.), Handbook of survey research (2nd ed., pp. 551-574). San Diego, CA: Elsevier.
  11. Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, phone, mail, and mixed-mode surveys: The tailored design method. John Wiley & Sons.
  12. Dodou, D., & De Winter, J. C. F. (2014). Social desirability is the same in offline, online, and paper surveys: A meta-analysis. Computers in Human Behavior, 36, 487-495.
  13. Dwight, S. A., & Feigelson, M. E. (2000). A quantitative review of the effect of computerized testing on the measurement of social desirability. Educational and Psychological Measurement, 60(3), 340-360.
  14. Dybå, T., Kitchenham, B. A., & Jørgensen, M. (2005). Evidence-based software engineering for practitioners. IEEE Software, 22(1), 58-65.
  15. Farrington, D. P., MacKenzie, D. L., Sherman, L. W., & Welsh, B. C. (Eds.). (2003). Evidence-based crime prevention. Routledge.
  16. Gnambs, T., & Kaspar, K. (2015). Disclosure of sensitive behaviors across self-administered survey modes: A meta-analysis. Behavior Research Methods, 47(4), 1237-1259.
  17. Gnambs, T., & Kaspar, K. (2016). Socially desirable responding in web-based questionnaires: A meta-analytic review of the candor hypothesis. Assessment. Advance online publication. http://dx.doi.org/10.1177/1073191115624547
  18. Goyder, J. (1985). Face-to-face interviews and mailed questionnaires: The net difference in response rate. Public Opinion Quarterly, 49(2), 234-252.
  19. Goyder, J. (1987). The silent minority: Nonrespondents on sample surveys. Westview Press.
  20. Green, K. E., & Hutchinson, S. R. (1996). Reviewing the research on mail survey response rates: Meta-analysis.
  21. Groves, R. M. (2004). Survey errors and survey costs. Hoboken, NJ: John Wiley & Sons.
  22. Groves, R. M., Fowler, F. J., Jr., Couper, M. P., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2011). Survey methodology (Vol. 561). John Wiley & Sons.
  23. Groves, R. M., & Peytcheva, E. (2008). The impact of nonresponse rates on nonresponse bias: A meta-analysis. Public Opinion Quarterly, 72(2), 167-189.
  24. Hox, J. J., & De Leeuw, E. D. (1994). A comparison of nonresponse in mail, telephone, and face-to-face surveys. Quality and Quantity, 28(4), 329-344.
  25. Lozar Manfreda, K., Bosnjak, M., Berzelak, J., Haas, I., Vehovar, V., & Berzelak, N. (2008). Web surveys versus other survey modes: A meta-analysis comparing response rates. International Journal of Market Research, 50(1), 79.
  26. Mavletova, A., & Couper, M. P. (2015). A meta-analysis of breakoff rates in mobile web surveys. In D. Toninelli, R. Pinter, & P. de Pedraza (Eds.), Mobile research methods: Opportunities and challenges of mobile research methodologies (pp. 81-98). London, UK: Ubiquity Press.
  27. Medway, R. L., & Fulton, J. (2012). When more gets you less: A meta-analysis of the effect of concurrent web options on mail survey response rates. Public Opinion Quarterly, 76(4), 733-746.
  28. Mercer, A., Caporaso, A., Cantor, D., & Townsend, R. (2015). How much gets you how much? Monetary incentives and response rates in household surveys. Public Opinion Quarterly, 79(1), 105-129.
  29. Pring, R., & Thomas, G. (2004). Evidence-based practice in education. McGraw-Hill Education (UK).
  30. Richman, W. L., Kiesler, S., Weisband, S., & Drasgow, F. (1999). A meta-analytic study of social desirability distortion in computer-administered questionnaires, traditional questionnaires, and interviews. Journal of Applied Psychology, 84(5), 754.
  31. Rousseau, D. M. (2012). The Oxford handbook of evidence-based management. Oxford University Press.
  32. Sackett, D. L., Rosenberg, W. M., Gray, J. M., Haynes, R. B., & Richardson, W. S. (1996). Evidence based medicine: What it is and what it isn't. British Medical Journal, 312(7023), 71-72.
  33. Shih, T.-H., & Fan, X. (2007). Response rates and mode preferences in web-mail mixed-mode surveys: A meta-analysis. International Journal of Internet Science, 2(1), 59-82.
  34. Shih, T.-H., & Fan, X. (2008). Comparing response rates from web and mail surveys: A meta-analysis. Field Methods, 20(3), 249-271.
  35. Tourangeau, R., & Yan, T. (2007). Sensitive questions in surveys. Psychological Bulletin, 133(5), 859.
  36. Weisband, S., & Kiesler, S. (1996). Self disclosure on computer forms: Meta-analysis and implications. In Proceedings of the ACM CHI 96 Human Factors in Computing Systems Conference (pp. 3-10). Vancouver, Canada. http://www.sigchi.org/chi96/proceedings/papers/Weisband/sw_txt.htm
  37. Ye, C., Fulton, J., & Tourangeau, R. (2011). More positive or more extreme? A meta-analysis of mode differences in response choice. Public Opinion Quarterly, 75(2), 349-365.

Copyright information

© Springer Fachmedien Wiesbaden GmbH 2017

Authors and Affiliations

  1. GESIS – Leibniz Institute for the Social Sciences, University of Mannheim, Mannheim, Germany
