Self-rated health (SRH) is widely used to study health across a range of disciplines. However, relatively little research examines how features of its measurement in surveys influence respondents’ answers and the overall quality of the resulting measurement. Manipulations of response option order and scale orientation are particularly relevant to assess for SRH given the increasing prominence of web-based survey data collection and because these factors are often outside the control of the researcher who is analyzing data collected by other investigators. We examine how the interplay of two features of SRH measurement influences respondents’ answers in a 2-by-3 factorial experiment that varies (1) the order in which the response options are presented (“excellent” to “poor” or “poor” to “excellent”) and (2) the orientation of the response option scale (vertical, horizontal, or banked). The experiment was conducted online using workers from Amazon Mechanical Turk (N = 2945). We find no main effects of response scale orientation and no interaction between response option order and scale orientation. However, we find main effects of response option order: mean SRH and the proportion in “excellent” or “very good” health are higher (better), and the proportion in “fair” or “poor” health lower, when the response options are ordered from “excellent” to “poor” rather than from “poor” to “excellent.” We also find heterogeneous treatment effects of response option ordering across respondent characteristics associated with ability. Overall, the implications for the validity and cross-survey comparability of SRH are likely considerable for response option ordering and minimal for scale orientation.
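The 2-by-3 factorial design described in the abstract can be sketched in code. The following is a purely illustrative simulation: the condition labels follow the abstract, but the assignment logic, respondent counts, and answers are hypothetical and are not the study’s data or method.

```python
import itertools
import random

# The two manipulated features from the abstract: response option order
# ("excellent" to "poor" vs. "poor" to "excellent") crossed with three
# scale orientations (vertical, horizontal, banked) = six design cells.
ORDERS = ["excellent-to-poor", "poor-to-excellent"]
ORIENTATIONS = ["vertical", "horizontal", "banked"]
CONDITIONS = list(itertools.product(ORDERS, ORIENTATIONS))

def assign_condition(rng):
    """Randomly assign one respondent to one of the six cells."""
    return rng.choice(CONDITIONS)

def cell_means(responses):
    """Mean SRH (coded 1 = poor ... 5 = excellent) per design cell."""
    sums, counts = {}, {}
    for cond, srh in responses:
        sums[cond] = sums.get(cond, 0) + srh
        counts[cond] = counts.get(cond, 0) + 1
    return {cond: sums[cond] / counts[cond] for cond in sums}

rng = random.Random(0)
# Simulated answers only -- NOT the experiment's data.
responses = [(assign_condition(rng), rng.randint(1, 5)) for _ in range(600)]
means = cell_means(responses)
print(len(means))  # number of populated design cells
```

Comparing the two order conditions within each orientation (and across orientations) is what would reveal the main effects and interaction the paper tests.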
Notably, this finding was opposite to the hypothesized direction: Toepoel et al. (2009) expected responses to shift toward the left (positive) side of the scale, given that selecting options on the right side of a horizontal scale requires more hand/eye movement.
The significant difference across language spoken in the childhood household remains when controlling for whether the respondent currently resides in the US, since respondents who grew up in a non-English-speaking household may vary in their English-language ability depending on where they now live. That the significant difference remains is likely because Mechanical Turk requires workers to have verifiable identities, compressing the variability across these measures.
Antoun, C., Zhang, C., Conrad, F. G., & Schober, M. F. (2016). Comparisons of Online Recruitment Strategies for Convenience Samples: Craigslist, Google AdWords, Facebook, and Amazon Mechanical Turk. Field Methods, 28(3), 231–246.
Bradburn, N. M., Sudman, S., & Wansink, B. (2004). Asking Questions: The Definitive Guide to Questionnaire Design. New York: Wiley.
Carp, F. M. (1974). Position Effects on Interview Responses. Journal of Gerontology, 29, 581–587.
Chan, J. C. (1991). Response-Order Effects in Likert-Type Scales. Educational and Psychological Measurement, 51, 531–540.
Christian, L. M., & Dillman, D. A. (2004). The Influence of Graphical and Symbolic Language Manipulations on Responses to Self-Administered Questions. Public Opinion Quarterly, 68, 57–80.
DeSalvo, K. B., Bloser, N., Reynolds, K., He, J., & Muntner, P. (2006). Mortality Prediction with a Single General Self-Rated Health Question. Journal of General Internal Medicine, 21, 267–275.
Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method (4th ed.). Hoboken: John Wiley & Sons, Inc.
Friedman, L. W., & Friedman, H. H. (1994). A Comparison of Vertical and Horizontal Rating Scales. The Mid-Atlantic Journal of Business, 30, 107–111.
Garbarski, D. (2016). Research in and Prospects for the Measurement of Health using Self-Rated Health. Public Opinion Quarterly, 80, 977–997.
Garbarski, D., Schaeffer, N. C., & Dykema, J. (2015). The Effects of Question Order and Response Option Order on Self-Rated Health. Quality of Life Research, 24, 1443–1453.
Garbarski, D., Schaeffer, N. C., & Dykema, J. (2016). The Effect of Response Option Order on Self-Rated Health: A Replication Study. Quality of Life Research, 25, 2117–2121.
Holbrook, A. L., Krosnick, J. A., Carson, R. T., & Mitchell, R. C. (2000). Violating Conversational Conventions Disrupts Cognitive Processing of Attitude Questions. Journal of Experimental Social Psychology, 36, 465–494.
Idler, E. L., & Benyamini, Y. (1997). Self-Rated Health and Mortality: A Review of Twenty-Seven Community Studies. Journal of Health and Social Behavior, 38, 21–37.
Jürges, H. (2007). True Health Vs Response Styles: Exploring Cross-Country Differences in Self-Reported Health. Health Economics, 16(2), 163–178.
Jylhä, M. (2009). What Is Self-Rated Health and Why Does It Predict Mortality? Towards a Unified Conceptual Model. Social Science & Medicine, 69, 307–316.
Krosnick, J. A. (1991). Response Strategies for Coping with the Cognitive Demands of Attitude Measures in Surveys. Applied Cognitive Psychology, 5, 213–236.
Krosnick, J. A. (1999). Survey Research. Annual Review of Psychology, 50, 537–567.
Krosnick, J. A., Narayan, S., & Smith, W. R. (1996). Satisficing in Surveys: Initial Evidence. New Directions for Evaluation, 70, 29–44.
Mavaddat, N., Valderas, J. M., van der Linde, R., Khaw, K. T., & Kinmonth, A. L. (2014). Association of Self-Rated Health with Multimorbidity, Chronic Disease and Psychosocial Factors in a Large Middle-Aged and Older Cohort from General Practice: A Cross-Sectional Study. BMC Family Practice, 15(1), 185.
Means, B., Nigam, A., Zarrow, M., Loftus, E. F., & Donaldson, M. S. (1989). Autobiographical Memory for Health-Related Events. Washington, DC: US Department of Health and Human Services, Public Health Service, Centers for Disease Control, National Center for Health Statistics.
Mingay, D. J., & Greenwell, M. T. (1989). Memory Bias and Response-Order Effects. Journal of Official Statistics, 5, 253–263.
OECD. (2015). Health at a Glance 2015: OECD Indicators. Paris: OECD Publishing.
Schaeffer, N. C., & Dykema, J. (2011). Questions for Surveys: Current Trends and Future Directions. Public Opinion Quarterly, 75(5), 909–961.
Schwarz, N. (1996). Cognition and Communication: Judgmental Biases, Research Methods, and the Logic of Conversation. Mahwah: Lawrence Erlbaum.
Smyth, J. (2014). Visual Design in Surveys: Using Numbers, Symbols, and Graphics Effectively. Washington, DC: Webinar sponsored by the Midwest Association for Public Opinion Research (MAPOR).
Smyth, J. D., Dillman, D. A., Christian, L. M., & Stern, M. J. (2006). Effects of Using Visual Design Principles to Group Response Options in Web Surveys. International Journal of Internet Science, 1, 6–16.
Stern, M. J., Dillman, D. A., & Smyth, J. D. (2007). Visual Design, Order Effects, and Respondent Characteristics in a Self-Administered Survey. Survey Research Methods, 1(3), 121–138.
Sudman, S., & Bradburn, N. M. (1982). Asking Questions. San Francisco: Jossey-Bass.
Toepoel, V., Das, M., & van Soest, A. (2009). Design of Web Questionnaires: The Effect of Layout in Rating Scales. Journal of Official Statistics, 25, 509–528.
Tourangeau, R., Couper, M. P., & Conrad, F. (2004). Spacing, Position, and Order: Interpretive Heuristics for Visual Features of Survey Questions. Public Opinion Quarterly, 68, 368–393.
Tourangeau, R., Couper, M. P., & Conrad, F. (2007). Color, Labels, and Interpretive Heuristics for Response Scales. Public Opinion Quarterly, 71, 91–112.
Tourangeau, R., Couper, M. P., & Conrad, F. G. (2013). “Up Means Good”: The Effect of Screen Position on Evaluative Ratings in Web Surveys. Public Opinion Quarterly, 77(S1), 69–88.
Tversky, A., & Kahneman, D. (1974). Judgment under Uncertainty: Heuristics and Biases. Science, 185(4157), 1124–1131.
Yan, T., & Keusch, F. (2015). The Effects of the Direction of Rating Scales on Survey Responses in a Telephone Survey. Public Opinion Quarterly, 79, 145–165.
Yan, T., & Tourangeau, R. (2008). Fast Times and Easy Questions: The Effects of Age, Experience and Question Complexity on Web Survey Response Times. Applied Cognitive Psychology, 22, 51–68.
This work was supported by core funding to the Center for Demography and Ecology from the National Institutes of Health [R24 HD047873] and to the Center for Demography of Health and Aging from the National Institute on Aging [P30 AG017266] at the University of Wisconsin–Madison. The authors thank Ashley Baber and Bill Byrnes for research assistance. The opinions expressed herein are those of the authors.
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.
Informed consent was obtained from all individual participants included in the study.
This study was approved by the Social and Behavioral Sciences Institutional Review Board at Loyola University Chicago.
The authors declare that they have no competing interests.
Cite this article
Garbarski, D., Schaeffer, N.C. & Dykema, J. The Effects of Features of Survey Measurement on Self-Rated Health: Response Option Order and Scale Orientation. Applied Research Quality Life 14, 545–560 (2019). https://doi.org/10.1007/s11482-018-9628-x
- Self-rated health
- Questionnaire design
- Response option order
- Scale orientation
- Web survey
- Amazon Mechanical Turk