
Getting the Most Out of Paradata

The Palgrave Handbook of Survey Research

Abstract

Survey paradata are data captured about the context and processes surrounding survey data collection. Many types of paradata are produced and collected automatically, primarily by the software systems used for computerized interviewing, but little is known about best practices for using the full range of available paradata. This chapter identifies some of the most commonly used forms of paradata, such as response times and call record data, and discusses how they have been studied and used in practice. Understanding how these forms of paradata are used effectively may help researchers identify new uses for other types of paradata. The chapter also proposes a number of areas for future research aimed at maximizing the value of paradata for researchers.

Author information

Correspondence to Frauke Kreuter.

Copyright information

© 2018 The Author(s)

About this chapter

Cite this chapter

Kreuter, F. (2018). Getting the Most Out of Paradata. In: Vannette, D., & Krosnick, J. (Eds.), The Palgrave Handbook of Survey Research. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-319-54395-6_24
