Process Control and Quality Measures

  • Richard Valliant
  • Jill A. Dever
  • Frauke Kreuter
Chapter
Part of the Statistics for Social and Behavioral Sciences book series (SSBS, volume 51)

Abstract

So far we have described a wide variety of tools and tasks necessary for sampling and weighting. Key to a successful project, however, is not only mastery of the tools and knowing which tool to use when, but also monitoring of the actual process, careful documentation of the steps taken, and the ability to replicate each of those steps. For any project, certain quality control measures should be taken before data collection, during sampling frame construction and sample selection, and again after data collection, during editing, weight calculation, and database construction. Well-planned projects are designed so that quality control is possible during the data collection process and so that steps to improve quality can be taken before the data collection period ends. The specific quality control measures will, of course, vary with the type of project. For example, repeated longitudinal data collection efforts allow comparisons to prior years, whereas one-time cross-sectional surveys often suffer from uncertainty about procedures and outcomes. However, we have found a core set of tools to be useful for almost all survey designs and introduce them in this chapter. We do want to emphasize that, while it is tempting to think that ensuring reproducibility and good documentation is only worth the effort for complex surveys that will be repeated, in our experience even the smallest survey “runs” better when the tools introduced here are used.
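
As a concrete illustration of the kind of process monitoring the chapter advocates, the sketch below computes Shewhart-style control limits for a daily completion-rate series and flags days worth investigating. It is a minimal example under our own assumptions, not code from the chapter: the function names and the data are hypothetical, and a production chart for proportions would typically use binomial (p-chart) limits rather than the simple mean ± 3 standard deviations rule shown here.

```python
# A minimal sketch, not from the chapter: a Shewhart-style individuals chart
# for monitoring a survey process indicator. Names and data are hypothetical.
from statistics import mean, stdev

def control_limits(series, k=3.0):
    """Center line and k-sigma control limits for an observed series."""
    center = mean(series)
    spread = stdev(series)
    return center, center - k * spread, center + k * spread

def out_of_control(series, k=3.0):
    """Indices of observations falling outside the control limits."""
    _, lcl, ucl = control_limits(series, k)
    return [i for i, x in enumerate(series) if x < lcl or x > ucl]

if __name__ == "__main__":
    # Hypothetical daily interview completion rates over two weeks.
    daily_rates = [0.62, 0.58, 0.61, 0.60, 0.59, 0.63, 0.57,
                   0.60, 0.61, 0.41, 0.62, 0.59, 0.58, 0.60]
    center, lcl, ucl = control_limits(daily_rates)
    print(f"center = {center:.3f}, limits = ({lcl:.3f}, {ucl:.3f})")
    print("days to investigate:", out_of_control(daily_rates))  # -> [9]
```

In a repeated longitudinal design, the limits could instead be fixed from a prior wave, which matches the comparison-to-prior-years idea in the abstract; a one-time cross-sectional survey has to estimate them from the accumulating data, as done here.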

Keywords

Control Chart · Critical Path · Statistical Process Control · Gantt Chart · Task Number

Copyright information

© Springer Science+Business Media New York 2013

Authors and Affiliations

  • Richard Valliant (1)
  • Jill A. Dever (2)
  • Frauke Kreuter (3)

  1. University of Michigan, Ann Arbor, USA
  2. RTI International, Washington, DC, USA
  3. University of Maryland, College Park, USA
