Panel Conditioning and Subjective Well-being
The importance of panel, or longitudinal, survey data for analyzing subjective well-being, and especially its dynamics, is increasingly recognized. Analyses of such data, however, must deal with two potential problems: (1) non-random attrition; and (2) panel conditioning. The former is a much-researched topic; in contrast, panel conditioning has received far less attention from the research community. In this analysis, longitudinal survey data collected from members of a large national probability sample of households are used to examine whether self-reported measures of psychological well-being exhibit any tendency to change over time in a way that might reflect panel conditioning. Regression models are estimated that control for all time-invariant influences as well as a set of time-varying influences. We find very little evidence that mean life satisfaction scores vary with length of time in the panel, especially once non-random attrition is controlled for. In contrast, scores on a measure of mental health do vary with time, and, surprisingly, men and women exhibit opposing patterns. For men, scores decline over time (though the estimates are not statistically robust), whereas for women the effects are both large and rise with time. Further, for both outcome measures there is a clear narrowing in the dispersion of reported scores over the first few waves of participation. These findings have implications for empirical research employing longitudinal data.
Keywords: HILDA Survey · Life satisfaction · Longitudinal data · Mental health · Panel conditioning
The research reported on in this paper was supported by an Australian Research Council Discovery Grant (#DP1095497). The paper uses unit record data from the Household, Income and Labour Dynamics in Australia (HILDA) Survey. The HILDA Survey Project was initiated and is funded by the Australian Government Department of Families, Housing, Community Services and Indigenous Affairs (FaHCSIA) and is managed by the Melbourne Institute of Applied Economic and Social Research (Melbourne Institute). The authors also thank Robert Cummins and Nicole Watson for helpful advice and comments on an earlier version of this paper. The findings and views reported in this paper, however, are those of the authors and should not be attributed to any of the aforementioned.