Abstract
Background
Prior research has outlined the benefits and costs of online health services, but these studies have typically focused on a specific geographic region or disease. Very few have estimated consumer demand for online health services.
Objective
This study estimated households’ willingness to pay (WTP) for the ability to receive remote diagnosis, treatment, monitoring, and consultations online (telehealth).
Methods
WTP was estimated with a random utility model and household data from a US survey employing repeated discrete-choice experiments.
Results
The representative household was willing to pay $US4.39 per month for telehealth. This valuation increased to $US5.85 for households with higher opportunity costs, as measured by income, and to $US6.22 for households living more than 20 miles away from their nearest medical facility.
Conclusion
WTP estimates offer insights into the potential benefits from policies intended to promote the expansion of online health services into underserved areas. These include the Federal Communications Commission’s Rural Healthcare Pilot Program and the Department of Agriculture’s Distance Learning and Telemedicine Grants programme.
Notes
After completing the survey, a small sub-sample of respondents provided additional open-ended comments on the individual questions, choice experiments and methodology. None of these comments indicated a lack of understanding of the descriptions of the service features in the choice scenarios.
The data from the follow-up question were still (hypothetical) stated preferences, elicited as a choice between keeping the household’s actual service and switching to the new service. Including these additional data in the estimation helps ground the predicted choices from the model in reality and can produce efficiency gains because of the increased sample information.
To account for the possibility of order effects that could confound the analysis, the order of the eight A–B choice questions in the two choice tasks was randomly assigned across all respondents.
The panel tenure in months for sample households ranged from 1 to 121, with a mean of 37.72 and standard deviation of 27.14. GfK knows whether a household previously had Internet service and the type of service. We used this information in another experiment to oversample households with <12 months of panel experience that did not have Internet service prior to recruitment. About 800 panel members met these criteria, and 472 were included in our sample. See Dennis [7] for a more detailed description of GfK’s sampling methodology.
Conventional bivariate probit, such as the model estimated by Stata’s biprobit command, does not estimate the variances for the two separate sources of choice data. We wrote the likelihood for our model with the programming language Gauss to permit identification of the ratio of the standard deviation of the errors in evaluating the SQ alternative to the errors in evaluating the pure hypothetical alternatives (i.e. the parameter λ in Table 3).
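The role of the scale parameter λ can be illustrated with a minimal sketch. The following Python code is our own illustration, not the authors' Gauss implementation: it simulates univariate probit choices in which a subset of observations (standing in for the status-quo evaluations) has error standard deviation λ relative to the rest, and maximizes the joint likelihood so that λ is identified from the common coefficients. All variable names and simulated values are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Simulate choices from two sources: "pure hypothetical" observations with
# unit error scale, and "status-quo" (SQ) observations whose errors have
# standard deviation lam_true. The coefficients beta are common to both,
# which is what identifies the scale ratio lambda.
rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=(n, 2))                    # two service attributes
beta_true = np.array([1.0, -0.5])
lam_true = 1.3
sq = rng.integers(0, 2, size=n).astype(bool)   # flag for SQ-type observations
scale = np.where(sq, lam_true, 1.0)
y = (x @ beta_true + scale * rng.normal(size=n) > 0).astype(float)

def negloglik(theta):
    # Probit likelihood: SQ observations have their utility index divided
    # by lambda, equivalent to errors with standard deviation lambda.
    beta, lam = theta[:2], theta[2]
    idx = (x @ beta) / np.where(sq, lam, 1.0)
    p = np.clip(norm.cdf(idx), 1e-12, 1 - 1e-12)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

res = minimize(negloglik, x0=np.array([0.5, -0.5, 1.0]), method="Nelder-Mead")
print(np.round(res.x, 2))   # estimated (beta1, beta2, lambda)
```

Because standard commands such as Stata's biprobit fix both scales to one, a hand-written likelihood of this kind is needed whenever the relative scale of the two choice-data sources is itself a parameter of interest.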
There are at most 5921 respondents with information on demographics and at least some of the eight A–B choices and the follow-up SQ versus A or B question. The starting maximum sample size for econometric estimation is effectively n = 5921 × 8 = 47,368.
Since the WTP estimates are nonlinear functions of the structural parameters from the RUM, their exact standard errors for the purpose of hypothesis testing are unknown. We used the delta method to obtain standard errors for the WTP estimates. For further details, see Appendix A and Savage and Waldman [14].
The parameter λ is generally estimated to be close to or greater than one in all models. We report its estimate and the corresponding test statistic for the full sample model but do not discuss it further.
By using subsamples to estimate preference heterogeneity rather than interacting demographics with telehealth, we did not constrain the parameters of the other variables to be equal across groups, a restriction that could misspecify the model. In addition, our general specification permits comparisons of the non-telehealth features across different demographic groups.
The sum of the rural and urban subsamples of 5703 is less than 5921 because we did not have location data for these 218 respondents.
References
Universal Service Administrative Company. Annual report 2012. Washington, D.C.: Universal Service Administrative Company; 2012.
US Department of Agriculture. FY 2009 budget summary and annual performance plan. Washington, D.C.: US Department of Agriculture; 2009.
Mühlbacher A, Johnson F. Choice experiments to quantify preferences for health and healthcare: state of the practice. Appl Health Econ Health Policy. 2016;14:253–66.
Train K. Discrete choice methods with simulation. 2nd ed. New York: Cambridge University Press; 2009.
Maddala GS. Limited-dependent and qualitative variables in econometrics. Cambridge: Cambridge University Press; 1983.
Huber J, Zwerina K. The importance of utility balance in efficient choice designs. J Mark Res. 1996;33:307–17.
Dennis M. Description of within-panel survey sampling methodology: the Knowledge Networks approach. Knowledge Networks: Government and Academic Research; 2009.
GfK. KnowledgePanel® Demographic Profile July 2009. Government and Academic Research, Knowledge Networks; 2009.
US Census Bureau. American factfinder. Washington, D.C.: US Census Bureau; 2009.
Thurstone L. A law of comparative judgment. Psychol Rev. 1927;34:273–86.
McFadden D. Conditional logit analysis of qualitative choice behavior. In: Zarembka P, editor. Frontiers in econometrics. New York: Academic Press; 1973. p. 105–42.
Grigsby W, Goetz S. Telehealth: what promise does it hold for rural areas—critical issues in rural health. Ames (IA): Blackwell Publishing Ltd; 2004.
Maheu M, Whitten P, Allen A. E-health, telehealth, and telemedicine: a guide to start-up and success. San Francisco: Jossey-Bass; 2001.
Savage S, Waldman D. Learning and fatigue during choice experiments: a comparison of online and mail survey modes. J Appl Econ. 2008;23:351–71.
Forman C, Goldfarb A, Greenstein S. How did location affect adoption of the commercial internet? Global village vs. urban leadership. J Urban Econ. 2005;58:389–420.
GeoLytics, Inc. CensusDVD long form (SF3). East Brunswick: GeoLytics, Inc.; 2010.
Berman A, Fenaughty M. Technology and managed care: patient benefits of telemedicine in a rural health care network. Health Econ. 2005;14(6):559–73.
Cauley S. The time price of medical care. Rev Econ Stat. 1987;69(1):59–66.
Grandchamp C, Gardiol L. Does a mandatory telemedicine call prior to visiting a physician reduce costs or simply attract good risks? Health Econ. 2011;20(10):1257–67.
Cotet A, Benjamin D. Medical regulation and health outcomes: the effect of the physician examination requirement. Health Econ. 2013;22(4):393–409.
Hitt L, Tambe P. Broadband adoption and content consumption. Info Econ Policy. 2007;19(3–4):362–78.
Sloan F, Hsieh C. Health economics. Cambridge: Massachusetts Institute of Technology; 2012.
Kling C, Sexton R. Bootstrapping in applied welfare analysis. Am J Agric Econ. 1990;72(2):406–18.
Krinsky I, Robb A. On approximating the statistical properties of elasticities. Rev Econ Stat. 1986;68(4):715–9.
Morey E, Waldman D. Functional form and the statistical properties of welfare measures: a comment. Am J Agric Econ. 1994;76(4):954–7.
Acknowledgements
The authors thank Martin Byford, Yongmin Chen, Jin-Hyuk Kim, Greg Rosston, Jennifer Thatcher, Scott Wallsten, Bradley Wimmer, the journal editor and two anonymous referees, and seminar participants at the University of Colorado for helpful comments and contributions. Kenton White provided valuable research assistance.
Author contributions
Each of the three co-authors contributed equally to and worked on all aspects of the research.
Ethics declarations
This study was performed in accordance with the ethical standards of the Declaration of Helsinki. Informed consent was obtained from all individual participants in this study.
Funding
This paper was funded by a contract from the FCC. The funder played no other role in the study. The opinions expressed in the paper are those of the authors and not the FCC.
Conflict of interest
Professor Savage, Professor Waldman and Professor Chang have no conflicts of interest associated with this research.
Appendices
Appendix A
1.1 Estimating the Standard Error of WTP Measures from Discrete Choice Experiments
Ignoring interactions, the utility model for Internet access choice is

\[ U_{ij} = \beta_{p} p_{ij} + \beta_{s} b_{ij} + \varvec{\beta }_{a}^{\prime } \mathbf{a}_{ij} + \varepsilon_{ij} , \quad (1) \]

where \(p_{ij}\) is price, \(b_{ij}\) is Telehealth, \(\mathbf{a}_{ij}\) is a K × 1 vector of attributes of the service other than price and Telehealth, and \(\varvec{\beta }_{a}\) is the corresponding K × 1 vector of coefficients. The estimates of WTP for these attributes are \(\hat{\varvec{\beta }}_{a} /\hat{\beta }_{p}\), and the estimated WTP for Telehealth is \(\hat{\omega }_{b} = \hat{\beta }_{s} /\hat{\beta }_{p}\).

Since the estimates of willingness to pay are nonlinear functions of parameter estimates, their exact standard errors are unknown. While it would be possible to bootstrap the distribution of these estimators, the simulation would not converge to anything useful because the normally distributed estimator of \(\beta_{p}\) appears in the denominator [23, 24]. Instead, we use a linear approximation to the variance (sometimes known as the “delta method”). This approximation for elasticities has been examined in [25].

Define the (K + 1) × 1 vector of WTP measures

\[ \varvec{\omega } = \left( \varvec{\beta }_{a}^{\prime } /\beta_{p} \ \vdots \ \beta_{s} /\beta_{p} \right)^{\prime } . \quad (2) \]

Define the (K + 2) × 1 vector of parameter estimates \(\hat{\varvec{\theta }} = \left( {\hat{\beta }_{p} \vdots \hat{\varvec{\beta }}_{a}^{\prime } \vdots \hat{\beta }_{s} } \right)^{\prime }\), and let \({\hat{\mathbf{\varSigma }}}\) be the estimated variance–covariance matrix of \(\hat{\varvec{\theta }}\). The linear approximation to the variance of \(\hat{\varvec{\omega }}\) is

\[ \hat{V}\left( {\hat{\varvec{\omega }}} \right) = \left( \frac{\partial \varvec{\omega }}{\partial \varvec{\theta }^{\prime }} \right) {\hat{\mathbf{\varSigma }}} \left( \frac{\partial \varvec{\omega }}{\partial \varvec{\theta }^{\prime }} \right)^{\prime } , \quad (3) \]

where the derivatives are evaluated at the parameter estimates. The square roots of the diagonal elements of \(\hat{V}\left( {\hat{\varvec{\omega }}} \right)\) are the estimated standard errors of the estimates of WTP. These derivatives form the (K + 1) × (K + 2) matrix

\[ \frac{\partial \varvec{\omega }}{\partial \varvec{\theta }^{\prime }} = \frac{1}{\beta_{p}} \left( { - \varvec{\omega } \ \vdots \ \mathbf{I}_{K + 1} } \right) , \]

where \(\mathbf{I}_{K + 1}\) is the identity matrix of order K + 1. Focusing on Telehealth, the estimated variance of the WTP for Telehealth from Eq. 3 is

\[ \hat{V}\left( \hat{\omega }_{b} \right) = \frac{1}{\hat{\beta }_{p}^{2}} \left[ \hat{V}\left( \hat{\beta }_{s} \right) - 2\hat{\omega }_{b} \, \widehat{\text{Cov}}\left( \hat{\beta }_{p} ,\hat{\beta }_{s} \right) + \hat{\omega }_{b}^{2} \hat{V}\left( \hat{\beta }_{p} \right) \right] . \]
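The delta-method calculation can be sketched numerically. The coefficients and variance–covariance entries below are made up for illustration only (they are not the paper's estimates), and the sign convention assumes price enters as a disutility so the WTP ratio is positive; the sketch verifies that the gradient form and the closed-form expression for the Telehealth variance agree.

```python
import numpy as np

# Hypothetical estimates for illustration: price coefficient beta_p (price
# entering as a disutility per dollar), Telehealth coefficient beta_s, and
# their 2x2 variance-covariance block, ordered (beta_p, beta_s).
beta_p, beta_s = 0.08, 0.35
vcov = np.array([[0.0004, -0.0001],
                 [-0.0001, 0.0025]])

# WTP for Telehealth and its delta-method standard error.
# omega = beta_s / beta_p; gradient w.r.t. (beta_p, beta_s) is
# (-beta_s / beta_p**2, 1 / beta_p).
omega = beta_s / beta_p
grad = np.array([-beta_s / beta_p**2, 1.0 / beta_p])
var_omega = grad @ vcov @ grad
se_omega = np.sqrt(var_omega)

# Closed-form version of the same variance:
# (1/beta_p^2) [V(beta_s) - 2*omega*Cov(beta_p, beta_s) + omega^2 * V(beta_p)]
var_closed = (vcov[1, 1] - 2 * omega * vcov[0, 1]
              + omega**2 * vcov[0, 0]) / beta_p**2

print(omega, se_omega)   # WTP in dollars and its standard error
```

With these made-up numbers the two variance expressions coincide, as they must, since the closed form is just the gradient quadratic form written out for a single element of ω.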
Appendix B
Cite this article
Chang, J., Savage, S.J. & Waldman, D.M. Estimating Willingness to Pay for Online Health Services with Discrete-Choice Experiments. Appl Health Econ Health Policy 15, 491–500 (2017). https://doi.org/10.1007/s40258-017-0316-z