
Process Control and Quality Measures

  • Richard Valliant
  • Jill A. Dever
  • Frauke Kreuter
Chapter in the Statistics for Social and Behavioral Sciences book series (SSBS)

Abstract

Key to a successful project is not only mastery of the tools presented in previous chapters, and knowing which tool to use when, but also monitoring of the actual process, careful documentation of the steps taken, and the ability to replicate each of those steps. Well-planned projects are designed so that quality control is possible during data collection and so that steps to improve quality can be taken before the data collection period ends. The specific quality control measures will, of course, vary with the type of project; this chapter reviews a core set of tools that are useful for almost all survey designs. While it is tempting to think that reproducibility and good documentation are only worth the effort for complex surveys that will be repeated, even the smallest survey “runs” better when the tools introduced here are used.
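As a purely illustrative sketch of the kind of in-field quality control the abstract describes, the Python snippet below flags field days whose response rates drift outside Shewhart-style control limits. The daily rates, the six-day baseline period, and the 3-sigma limits are all hypothetical assumptions for the sake of the example, not the chapter's own data or prescription.

```python
"""Minimal sketch: Shewhart-style control limits on fieldwork paradata.

All numbers are hypothetical; the baseline/monitoring split and the
3-sigma limits are conventional choices, not taken from the chapter.
"""
import statistics

# Hypothetical paradata: share of attempted cases completed per field day.
daily_rates = [0.52, 0.49, 0.55, 0.51, 0.48, 0.50, 0.38, 0.53]

# Establish control limits from an assumed six-day baseline period.
baseline, monitored = daily_rates[:6], daily_rates[6:]
center = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
lower, upper = center - 3 * sigma, center + 3 * sigma  # 3-sigma limits

print(f"center={center:.3f}, limits=({lower:.3f}, {upper:.3f})")

# Flag monitored days that fall outside the control limits.
for day, rate in enumerate(monitored, start=len(baseline) + 1):
    status = "investigate" if not lower <= rate <= upper else "in control"
    print(f"day {day}: rate={rate:.2f} -> {status}")
```

In practice, limits like these would be tracked on whatever paradata the design makes available, such as contact, refusal, or completion rates per day or per interviewer, so that out-of-control days trigger investigation while fieldwork can still be corrected.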

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Richard Valliant (1, 2)
  • Jill A. Dever (3)
  • Frauke Kreuter (2, 4)

  1. University of Michigan, Ann Arbor, USA
  2. University of Maryland, College Park, USA
  3. RTI International, Washington, DC, USA
  4. University of Mannheim, Mannheim, Germany
