Volume 49, Issue 4, pp. 1499–1519

Panel Conditioning in Longitudinal Studies: Evidence From Labor Force Items in the Current Population Survey

  • Andrew Halpern-Manners
  • John Robert Warren


Does participating in a longitudinal survey affect respondents’ answers to subsequent questions about their labor force characteristics? In this article, we investigate the magnitude of panel conditioning or time-in-survey biases for key labor force questions in the monthly Current Population Survey (CPS). Using linked CPS records for household heads first interviewed between January 2007 and June 2010, our analyses are based on strategic within-person comparisons across survey months and between-person comparisons across CPS rotation groups. We find considerable evidence for panel conditioning effects in the CPS. Panel conditioning downwardly biases the CPS-based unemployment rate, mainly by leading people to remove themselves from its denominator. Across surveys, CPS respondents (claim to) leave the labor force in greater numbers than otherwise equivalent respondents who are participating in the CPS for the first time. The results cannot be attributed to panel attrition or mode effects. We discuss implications for CPS-based research and policy as well as for survey methodology more broadly.


Keywords: Panel conditioning · Survey research methods · Labor force



Order of authorship is alphabetical to reflect equal contributions by the authors. This article was inspired by a conversation with Michael Hout, and was originally prepared for presentation at the April 2010 annual meetings of the Population Association of America. The National Science Foundation (SES-0647710) and the University of Minnesota’s Life Course Center, Department of Sociology, College of Liberal Arts, and Minnesota Population Center have provided support for this project. We warmly thank Eric Grodsky, Ross Macmillan, Gregory Weyland, anonymous reviewers, and workshop participants at the University of Minnesota, the University of Texas, the University of Wisconsin-Madison, and New York University for their constructive criticism and comments. Finally, we thank Anne Polivka, Dorinda Allard, and Steve Miller at the U.S. Bureau of Labor Statistics for their helpful feedback. However, all errors or omissions are the authors’ responsibility.



Copyright information

© Population Association of America 2012

Authors and Affiliations

  1. Department of Sociology and Minnesota Population Center, University of Minnesota, Minneapolis, USA
