Abstract
Does participating in a longitudinal survey affect respondents’ answers to subsequent questions about their labor force characteristics? In this article, we investigate the magnitude of panel conditioning or time-in-survey biases for key labor force questions in the monthly Current Population Survey (CPS). Using linked CPS records for household heads first interviewed between January 2007 and June 2010, our analyses are based on strategic within-person comparisons across survey months and between-person comparisons across CPS rotation groups. We find considerable evidence for panel conditioning effects in the CPS. Panel conditioning downwardly biases the CPS-based unemployment rate, mainly by leading people to remove themselves from its denominator. Across surveys, CPS respondents (claim to) leave the labor force in greater numbers than otherwise equivalent respondents who are participating in the CPS for the first time. The results cannot be attributed to panel attrition or mode effects. We discuss implications for CPS-based research and policy as well as for survey methodology more broadly.
Notes
For example, by our count, of the 909 articles published in Demography between 1990 and 2010, at least 75 (or 8.2 %) of them featured original analyses of CPS data; another 61 (or 6.7 %) utilized CPS data in some other capacity.
For a more comprehensive review of this literature, see Warren and Halpern-Manners (forthcoming).
For instance, Chandon et al. (2004) showed that randomly selected customers of an online grocer who were asked about their intentions to make future purchases were substantially more likely than members of a randomly selected control group to make subsequent purchases from that grocer.
It is important to distinguish panel conditioning from social desirability biases. The latter may lead people to underreport unemployment, but it will lead to consistent levels of underreporting across CPS surveys—and thus would not bias estimates of change over time. Panel conditioning, in contrast, would lead to underreporting of unemployment on follow-up, but not baseline, waves of the CPS—and thus would bias estimates of within-person change over time.
In fact, survey length is not appreciably affected by whether respondents indicate that they are employed, unemployed, or out of the labor force. However, because they do not observe the counterfactual pathway through the CPS interview, respondents may believe that an alternate response may have led to fewer questions. This suspicion may be enough to lead some respondents to provide different (and inaccurate) responses in subsequent months.
In this and subsequent comparisons, observations are weighted by the cross-sectional survey weight in the “origin” month.
In the absence of panel conditioning, we would certainly not expect unemployment rates for people in MIS 1 to be exactly equal to those for people in MIS 2 in any particular calendar month; sometimes they will be higher, and sometimes they will be lower. However, in the absence of panel conditioning, we would expect the difference to be positive as often as it is negative. As shown in Fig. 1, the unemployment rate is higher for those in MIS 2 in 29 of 41 (71 %) possible comparisons. If we take this to be a binomial process (where 1 means that MIS1 – MIS2 is negative, and 0 means that MIS1 – MIS2 is positive), and if we take Pr(MIS1 – MIS2 < 0) to be .5, then the probability of getting 29 or more negative values in 41 trials is about .005. This suggests that our findings are not the result of sampling error.
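The exact binomial tail probability described in this note can be verified directly; a minimal sketch in Python, using only the standard library:

```python
from math import comb

# Sign test for the MIS 1 vs. MIS 2 comparisons: in 29 of 41 months,
# the MIS 2 unemployment rate exceeded the MIS 1 rate. Under the null
# hypothesis of no conditioning, each monthly comparison is a fair
# coin flip with Pr(MIS1 - MIS2 < 0) = .5.
n, k = 41, 29
p_tail = sum(comb(n, i) for i in range(k, n + 1)) / 2**n
print(f"Pr(X >= {k} | n = {n}, p = .5) = {p_tail:.4f}")  # about .0058
```

The exact tail probability is roughly .006, consistent with the "about .005" figure reported in the note.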
In an effort to account for panel attrition effects, in this supplementary analysis, we have used a form of poststratification weighting to make the two groups of household heads identical with respect to their distributions of age, race/ethnicity, sex, marital status, region of residence, urbanicity, and nativity status; this procedure is detailed in Warren and Halpern-Manners (forthcoming).
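The logic of this poststratification step can be sketched as follows. This is a minimal illustration only: the demographic cells and records below are hypothetical, and the actual cell definitions and procedure are those detailed in Warren and Halpern-Manners (forthcoming). Each record in the comparison group is weighted by the ratio of its cell's share in the reference group to its cell's share in the comparison group, so that the weighted cell distributions of the two groups coincide.

```python
from collections import Counter

def poststratify(reference, comparison):
    """Weight each comparison-group record so that the weighted
    distribution of demographic cells matches the reference group."""
    ref_n, cmp_n = len(reference), len(comparison)
    ref_cells = Counter(reference)
    cmp_cells = Counter(comparison)
    # weight = (reference cell share) / (comparison cell share)
    return [(ref_cells[r] / ref_n) / (cmp_cells[r] / cmp_n)
            for r in comparison]

# Hypothetical demographic cells: (age group, sex)
ref = [("25-44", "F"), ("25-44", "M"), ("45-64", "F"), ("45-64", "M")]
cmp_grp = [("25-44", "F"), ("25-44", "F"), ("25-44", "M"),
           ("45-64", "F"), ("45-64", "M"), ("45-64", "M")]
w = poststratify(ref, cmp_grp)
# After weighting, each cell's weighted share in cmp_grp equals its
# share in ref (0.25 for every cell in this toy example).
```

Note that this sketch assumes every reference-group cell is also present in the comparison group; the supplementary analysis uses the full set of covariates listed above.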
It is important to note that panel conditioning in the CPS leads to bias in estimated levels of unemployment, but does not lead to systematic bias in short- or long-term trends in that rate.
References
Anderson, B. A., Silver, B. D., & Abramson, P. R. (1988). The effects of race of the interviewer on measures of electoral participation by blacks in SRC national election studies. Public Opinion Quarterly, 52, 53–83.
Bailar, B. A. (1975). Effects of rotation group bias on estimates from panel surveys. Journal of the American Statistical Association, 70(349), 23–30.
Bailar, B. A. (1989). Information needs, surveys, and measurement errors. In D. Kasprzyk, G. J. Duncan, G. Kalton, & M. P. Singh (Eds.), Panel surveys (pp. 1–24). New York: Wiley.
Bartels, L. M. (1999). Panel effects in the American National Election Studies. Political Analysis, 8, 1–20.
Battaglia, M. P., Zell, E., & Ching, P. (1996). Can participating in a panel sample introduce bias into trend estimates? (National Immunization Survey Working Paper). Washington, DC: National Center for Health Statistics. Retrieved from http://www.cdc.gov/nis/pdfs/estimation_weighting/battaglia1996b.pdf
Borle, S., Dholakia, U. M., Singh, S. S., & Westbrook, R. A. (2007). The impact of survey participation on subsequent customer behavior: an empirical investigation. Marketing Science, 26, 711–726.
Bridge, R. G., Reeder, L. G., Kanouse, D., Kinder, D. R., Nagy, V. T., & Judd, C. M. (1977). Interviewing changes attitudes—Sometimes. Public Opinion Quarterly, 41, 56–64.
Campbell, D. T., & Stanley, J. C. (1966). Experimental and quasi-experimental designs for research. Chicago, IL: Rand McNally.
Cantor, D. (2008). A review and summary of studies on panel conditioning. In S. Menard (Ed.), Handbook of longitudinal research: Design, measurement, and analysis (pp. 123–138). Burlington, MA: Academic Press.
Chandon, P., Morwitz, V. G., & Reinartz, W. J. (2004). The short- and long-term effects of measuring intent to repurchase. Journal of Consumer Research, 31, 566–572.
Chandon, P., Morwitz, V. G., & Reinartz, W. J. (2005). Do intentions really predict behavior? Self-generated validity effects in survey research. Journal of Marketing, 69(2), 1–14.
Clausen, A. R. (1968). Response validity: Vote report. Public Opinion Quarterly, 32, 588–606.
Das, M., Toepoel, V., & van Soest, A. (2007). Can I use a panel? Panel conditioning and attrition bias in panel surveys (CentER Discussion Paper Series No. 2007-56). Tilburg, The Netherlands: Tilburg University CentER.
Das, M., Toepoel, V., & van Soest, A. (2011). Nonparametric tests of panel conditioning and attrition bias in panel surveys. Sociological Methods & Research, 40, 32–56.
De Amici, D., Klersy, C., Ramajoli, F., Brustia, L., & Politi, P. (2000). Impact of the Hawthorne effect in a longitudinal clinical study: The case of anesthesia. Controlled Clinical Trials, 21, 103–114.
Dholakia, U. M., & Morwitz, V. G. (2002). The scope and persistence of mere-measurement effects: Evidence from a field study of customer satisfaction measurement. Journal of Consumer Research, 29, 159–167.
Duan, N., Alegria, M., Canino, G., McGuire, T. G., & Takeuchi, D. (2007). Survey conditioning in self-reported mental health service use: Randomized comparison of alternative instrument formats. Health Services Research, 42, 890–907.
Fazio, R. H. (1989). On the power and functionality of attitudes: The role of attitude accessibility. In A. R. Pratkanis, S. J. Breckler, & A. G. Greenwald (Eds.), Attitude structure and function (pp. 153–179). Hillsdale, NJ: Lawrence Erlbaum Associates.
Fazio, R. H., Sanbonmatsu, D. M., Powell, M. C., & Kardes, E. R. (1986). On the automatic activation of attitudes. Journal of Personality and Social Psychology, 50, 229–238.
Feldman, J. M., & Lynch, J. G. (1988). Self-generated validity and other effects of measurement on belief, attitude, intention, and behavior. Journal of Applied Psychology, 73, 421–435.
Fitzsimons, G. J., & Moore, S. G. (2008). Should we ask our children about sex, drugs and rock & roll? Potentially harmful effects of asking questions about risky behaviors. Journal of Consumer Psychology, 18, 82–95.
Fitzsimons, G. J., Nunes, J. C., & Williams, P. (2007). License to sin: The liberating role of reporting expectations. Journal of Consumer Research, 34(1), 22–31.
Fitzsimons, G. J., & Williams, P. (2000). Asking questions can change choice behavior: Does it do so automatically or effortfully? Journal of Experimental Psychology-Applied, 6, 195–206.
Godin, G., Sheeran, P., Conner, M., & Germain, M. (2008). Asking questions changes behavior: Mere measurement effects on frequency of blood donation. Health Psychology, 27, 179–184.
Granberg, D., & Holmberg, S. (1992). The Hawthorne effect in election studies—The impact of survey participation on voting. British Journal of Political Science, 22, 240–247.
Greenwald, A. G., Carnot, C. G., Beach, R., & Young, B. (1987). Increasing voting-behavior by asking people if they expect to vote. Journal of Applied Psychology, 72, 315–318.
Hansen, M. H., Hurwitz, W. N., Nisselson, H., & Steinberg, J. (1955). The redesign of the census current population survey. Journal of the American Statistical Association, 50, 701–719.
Hernandez, L. M., Durch, J. S., Blazer, D. G., & Hoverman, I. V. (1999). Gulf War veterans: Measuring health. Committee on Measuring the Health of Gulf War Veterans, Division of Health Promotion and Disease Prevention, Institute of Medicine. Washington, DC: National Academies Press.
Holt, D. (1989). Panel conditioning: Discussion. In D. Kasprzyk, G. J. Duncan, G. Kalton, & M. P. Singh (Eds.), Panel surveys (pp. 340–347). New York: Wiley.
Janiszewski, C., & Chandon, E. (2007). Transfer-appropriate processing, response fluency, and the mere measurement effect. Journal of Marketing Research, 44, 309–323.
Jensen, P. S., Watanabe, H. K., & Richters, J. E. (1999). Who’s up first? Testing for order effects in structured interviews using a counterbalanced experimental design. Journal of Abnormal Child Psychology, 27, 439–445.
Kalton, G., & Citro, C. F. (2000). Panel surveys: Adding the fourth dimension. In D. Rose (Ed.), Researching social and economic change (pp. 36–53). London, UK: Routledge.
Kessler, R. C., Wittchen, H.-U., Abelson, J. A., McGonagle, K., Schwarz, N., Kendler, K. S., . . . Zhao, S. (1998). Methodological studies of the Composite International Diagnostic Interview (CIDI) in the US National Comorbidity Survey (NCS). International Journal of Methods in Psychiatric Research, 7, 33–55.
Kraut, R. E., & McConahay, J. B. (1973). How being interviewed affects voting: An experiment. Public Opinion Quarterly, 37, 398–406.
Landsberger, H. A. (1958). Hawthorne revisited. Ithaca, NY: Cornell University.
Levav, J., & Fitzsimons, G. J. (2006). When questions change behavior—The role of ease of representation. Psychological Science, 17, 207–213.
Lucas, C. P., Fisher, P., Piacentini, J., Zhang, H., Jensen, P. S., Shaffer, D., . . . Canino, G. (1999). Features of interview questions associated with attenuation of symptom reports. Journal of Abnormal Child Psychology, 27, 429–437.
Mathiowetz, N. A., & Lair, T. J. (1994). Getting better? Change or error in the measurement of functional limitations. Journal of Economic and Social Measurement, 20, 237–262.
McCormick, M. K., Butler, D. M., & Singh, R. P. (1992). Investigating time in sample effect for the Survey of Income and Program Participation. In Proceedings of the Survey Research Methods Section of the American Statistical Association.
Meurs, H., Wissen, L. V., & Visser, J. (1989). Measurement biases in panel data. Transportation, 16, 175–194.
Millar, M. G., & Tesser, A. (1986). Thought-induced attitude-change: The effects of schema structure and commitment. Journal of Personality and Social Psychology, 51, 259–269.
Morwitz, V. G. (2005). The effect of survey measurement on respondent behaviour. Applied Stochastic Models in Business and Industry, 21, 451–455.
Nancarrow, C., & Cartwright, T. (2007). Online access panels and tracking research: The conditioning issue. International Journal of Market Research, 49, 573–594.
National Research Council. (2009). In C. F. Citro & J. K. Scholz (Eds.), Reengineering the Survey of Income and Program Participation. Panel on the Census Bureau’s Reengineered Survey of Income and Program Participation. Committee on National Statistics, Division of Behavioral and Social Sciences and Education. Washington, DC: National Academies Press.
Olson, J. A. (1999). Linkages with data from the social security administrative records in the health and retirement study. Social Security Bulletin, 62, 73–85.
O’Sullivan, I., Orbell, S., Rakow, T., & Parker, R. (2004). Prospective research in health service settings: Health psychology, science and the “Hawthorne” effect. Journal of Health Psychology, 9, 355–359.
Pennell, S. G., & Lepkowski, J. N. (1992). Panel conditioning effects in the Survey of Income and Program Participation. In Proceedings of the Survey Research Methods Section of the American Statistical Association.
Shack-Marquez, J. (1986). Effects of repeated interviewing on estimation of labor-force status. Journal of Economic and Social Measurement, 14, 379–398.
Sherman, S. J. (1980). On the self-erasing nature of errors of prediction. Journal of Personality and Social Psychology, 39, 211–221.
Shockey, J. W. (1988). Adjusting for response error in panel surveys: A latent class approach. Sociological Methods & Research, 17, 65–92.
Simmons, C. J., Bickart, B. A., & Lynch, J. G. (1993). Capturing and creating public opinion in survey research. Journal of Consumer Research, 20, 316–329.
Solomon, R. L. (1949). An extension of control group design. Psychological Bulletin, 46, 137–150.
Solon, G. (1986). Effects of rotation group bias on estimation of unemployment. Journal of Business & Economic Statistics, 4(1), 105–109.
Spangenberg, E. R., Greenwald, A. G., & Sprott, D. E. (2008). Will you read this article’s abstract? Theories of the question-behavior effect. Journal of Consumer Psychology, 18, 102–106.
Spangenberg, E. R., Sprott, D. E., Grohmann, B., & Smith, R. J. (2003). Mass-communicated prediction requests: Practical application and a cognitive dissonance explanation for self-prophecy. Journal of Marketing, 67(3), 47–62.
Sturgis, P., Allum, N., & Brunton-Smith, I. (2009). Attitudes over time: The psychology of panel conditioning. In P. Lynn (Ed.), Methodology of longitudinal surveys (pp. 113–126). New York: Wiley.
Tesser, A. (1978). Self-generated attitude change. In L. Berkowitz (Ed.), Advances in experimental social psychology (pp. 289–338). New York: Academic Press.
Toh, R. S., Lee, E., & Hu, M. Y. (2006). Social desirability bias in diary panels is evident in panelists’ behavioral frequency. Psychological Reports, 99, 322–334.
Torche, F., Warren, J. R., Halpern-Manners, A., & Valenzuela, E. (2012). Panel conditioning in a longitudinal study of Chilean adolescents' substance use: Evidence from an experiment. Social Forces, 90, 891–918.
Traugott, M. W., & Katosh, J. P. (1979). Response validity in surveys of voting-behavior. Public Opinion Quarterly, 43, 359–377.
U.S. Bureau of Labor Statistics. (2006). Design and methodology: Current population survey (Technical Paper 66). Washington, DC: U.S. Department of Labor, Census Bureau.
Voogt, R. J. J., & Van Kempen, H. (2002). Nonresponse bias and stimulus effects in the Dutch National Election Study. Quality & Quantity, 36, 325–345.
Wang, K., Cantor, D., & Safir, A. (2000). Panel conditioning in a random digit dial survey. In Proceedings of the Survey Research Methods Section of the American Statistical Association.
Warren, J. R., & Halpern-Manners, A. (Forthcoming). Panel conditioning in longitudinal social science surveys. Sociological Methods & Research.
Waterton, J., & Lievesley, D. (1989). Evidence of conditioning effects in the British Social Attitudes Panel Survey. In D. Kasprzyk, G. J. Duncan, G. Kalton, & M. P. Singh (Eds.), Panel surveys (pp. 319–339). New York: Wiley.
Williams, P., Block, L. G., & Fitzsimons, G. J. (2006). Simply asking questions about health behaviors increases both healthy and unhealthy behaviors. Social Influence, 1, 117–127.
Williams, P., Fitzsimons, G. J., & Block, L. G. (2004). When consumers do not recognize “benign” intention questions as persuasion attempts. Journal of Consumer Research, 31, 540–550.
Williams, W. H., & Mallows, C. L. (1970). Systematic biases in panel surveys due to differential nonresponse. Journal of the American Statistical Association, 65, 1338–1349.
Yalch, R. F. (1976). Pre-election interview effects on voter turnout. Public Opinion Quarterly, 40, 331–336.
Acknowledgments
Order of authorship is alphabetical to reflect equal contributions by the authors. This article was inspired by a conversation with Michael Hout, and was originally prepared for presentation at the April 2010 annual meetings of the Population Association of America. The National Science Foundation (SES-0647710) and the University of Minnesota’s Life Course Center, Department of Sociology, College of Liberal Arts, and Minnesota Population Center have provided support for this project. We warmly thank Eric Grodsky, Ross Macmillan, Gregory Weyland, anonymous reviewers, and workshop participants at the University of Minnesota, the University of Texas, the University of Wisconsin-Madison, and New York University for their constructive criticism and comments. Finally, we thank Anne Polivka, Dorinda Allard, and Steve Miller at the U.S. Bureau of Labor Statistics for their helpful feedback. However, all errors or omissions are the authors’ responsibility.
Cite this article
Halpern-Manners, A., Warren, J.R. Panel Conditioning in Longitudinal Studies: Evidence From Labor Force Items in the Current Population Survey. Demography 49, 1499–1519 (2012). https://doi.org/10.1007/s13524-012-0124-x