An Overview of Data Collection in Health Preference Research

  • Practical Application
  • Published: 2024
  • Journal: The Patient - Patient-Centered Outcomes Research

Abstract

This paper focuses on survey administration and data collection methods for stated-preference studies in health applications. First, it describes the main types of survey administration methods: web-based surveys, face-to-face (in-person) surveys, and mail surveys. Second, it introduces the concept of sampling frames, clarifying the distinction between the target population and the survey frame population. The discussion then extends to sampling methods, both probability and non-probability, and evaluates the potential issues each raises in the context of health preference research. Third, the paper describes recruitment methods, including web surveys, leveraging patient groups, and in-clinic recruitment. Fourth, it addresses the calculation of response rates, including what constitutes an adequate response rate and strategies for improving response rates in stated-preference surveys. Lastly, the paper discusses data management plans and suggests directions for future research in this field. In summary, this paper examines the nuanced aspects of survey administration and data collection methods in stated-preference studies, offering practical guidance for researchers and practitioners in the health domain.
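
The abstract's distinction between probability and non-probability sampling is concrete enough to illustrate in code. The following is a minimal Python sketch, not an implementation from the paper: the sampling frame of 1,000 patients, the "site" strata, and the sample size of 100 are all invented for illustration. It contrasts simple random sampling with proportionally stratified sampling, two common probability methods.

```python
import random
from collections import Counter

random.seed(7)

# Hypothetical sampling frame: 1,000 patients, each tagged with a clinic
# site. Frame, strata, and sample sizes are illustrative assumptions.
frame = [{"id": i, "site": random.choice(["urban", "rural"])} for i in range(1000)]

# Probability method 1: simple random sampling -- every patient has an
# equal selection probability, but small strata can be under-represented
# in any one draw.
srs = random.sample(frame, k=100)

# Probability method 2: stratified sampling -- draw within each site in
# proportion to its share of the frame, guaranteeing representation.
# (Per-stratum rounding can shift the total by a unit or so.)
stratified = []
for site in ("urban", "rural"):
    stratum = [p for p in frame if p["site"] == site]
    k = round(100 * len(stratum) / len(frame))  # proportional allocation
    stratified.extend(random.sample(stratum, k))

print("SRS site mix:       ", Counter(p["site"] for p in srs))
print("Stratified site mix:", Counter(p["site"] for p in stratified))
```

Non-probability approaches (e.g., convenience or online-panel samples) dispense with a frame of known selection probabilities, which is why the paper flags them as raising distinct issues for health preference research.

The fourth topic, response-rate calculation, is likewise easiest to pin down with a formula. The sketch below assumes the widely used AAPOR outcome-rate definitions (RR1 and RR2) rather than reproducing any formula from the paper itself, and the disposition counts are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class SurveyDispositions:
    """Final case dispositions for a fielded survey sample (hypothetical)."""
    complete: int     # I: completed interviews
    partial: int      # P: partial interviews / break-offs with usable data
    refusal: int      # R: refusals
    non_contact: int  # NC: eligible but never reached
    other: int        # O: eligible, not interviewed (e.g., language barrier)
    unknown: int      # U: eligibility could not be determined

def _denominator(d: SurveyDispositions) -> int:
    # All cases that are, or may be, eligible sample members.
    return d.complete + d.partial + d.refusal + d.non_contact + d.other + d.unknown

def response_rate_1(d: SurveyDispositions) -> float:
    """AAPOR RR1: complete interviews over all potentially eligible cases."""
    return d.complete / _denominator(d)

def response_rate_2(d: SurveyDispositions) -> float:
    """AAPOR RR2: completes plus partials over the same denominator."""
    return (d.complete + d.partial) / _denominator(d)

# Illustrative counts only -- not figures from the paper.
d = SurveyDispositions(complete=412, partial=38, refusal=150,
                       non_contact=220, other=30, unknown=150)
print(f"RR1 = {response_rate_1(d):.1%}")  # 41.2%
print(f"RR2 = {response_rate_2(d):.1%}")  # 45.0%
```

RR1 is the most conservative definition (completes only), while RR2 also credits partial interviews; stating which definition was used matters when judging whether a stated-preference survey's response rate is adequate or when comparing rates across studies.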


Data availability

Not applicable.


Acknowledgements

We would like to thank Marieke Weernink for her comments on an earlier version of the paper.

Author information

Corresponding author

Correspondence to Semra Ozdemir.

Ethics declarations

Conflict of Interest

The authors declare no conflicts of interest.

Funding

The authors did not receive support from any organization for the submitted work.

Author contributions

All authors contributed to the conception and drafting of the paper. All authors read and approved the final manuscript.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Ozdemir, S., Quaife, M., Mohamed, A.F. et al. An Overview of Data Collection in Health Preference Research. Patient (2024). https://doi.org/10.1007/s40271-024-00695-6


  • DOI: https://doi.org/10.1007/s40271-024-00695-6
