
Adjusting and Calibrating Elicited Values Based on Follow-up Certainty Questions: A Meta-analysis

Published in Environmental and Resource Economics

Abstract

Researchers have proposed many methods to reduce hypothetical bias (HB) in stated preference studies. One of the earliest and most popular is the Certainty follow-up, in which respondents state how sure they are of their answer to the valuation question they have just completed. The Certainty follow-up permits a range of cutoffs for calibrating HB, a flexibility that other popular HB mitigation methods, such as Cheap Talk, do not offer. Even for a given cutoff, the method's ability to reduce HB may vary with characteristics of the Certainty follow-up and of the study. Using a meta-analysis, we find that the Certainty follow-up adjusts for potential HB more effectively than Cheap Talk and that the value elicitation method, the mode of data collection, and whether a study uses other HB mitigation methods can all affect its efficacy. Recoding Certainty follow-up responses quantitatively or qualitatively can be equally effective, whether benchmarked against unadjusted hypothetical values, where potential HB may occur, or against values elicited under real binding conditions, where the actual magnitude of HB is known. There is strong evidence that HB can be fully calibrated away or even overcorrected, but we encourage more Certainty follow-up studies with binding elicitations to fully explore the potential of this method.


Notes

  1. While real value in our paper refers to elicitations that involve an actual money transaction, admittedly even these may not reflect true or accurate demand (Ariely et al. 2003).

  2. Earlier works using Certainty adjustments exist (e.g., Johannesson 1993; Bemmaor 1995).

  3. Some studies have incorrectly referred to Champ et al. (1997)’s recoding method as ASUM. It can be viewed as a special case of ASUM in that, first, it generates definitive yes and no responses without assigning them probabilities and, second, it is the most conservative variant, recoding every yes with less than 100% certainty to a no.
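The conservative recode described in this note can be sketched in a few lines. This is an illustrative reconstruction, not code from the paper; the function name, tuple layout, and the 10-point scale are assumptions.

```python
# Hypothetical sketch of the Champ et al. (1997)-style conservative recode:
# any "yes" whose stated certainty falls below the cutoff is treated as a "no".
def conservative_recode(responses, cutoff=10):
    """responses: list of (answered_yes: bool, certainty: int on a 1-10 scale)."""
    return [(yes and certainty >= cutoff) for yes, certainty in responses]

sample = [(True, 10), (True, 7), (False, 9), (True, 4)]
recoded = conservative_recode(sample, cutoff=10)
# Only the fully certain "yes" survives the recode.
```

Lowering `cutoff` recovers the less conservative recode levels (e.g., Quant8, Quant7) that the meta-analysis compares.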

  4. Also referred to as multiple-bounded uncertainty choice (MBUC). See Mahieu et al. (2014) for a comprehensive literature review of this method.

  5. Some studies report economic values per sub-sample, such as WTP for those who are definitely sure versus probably sure. While Certainty adjustment is expected to lower the overall mean of the estimated values, subgrouping is not a type of Certainty adjustment; the values reported for individuals who are sure or probably sure do not necessarily represent the calibrated sample average. For instance, by allowing an unsure answer to the valuation question, Champ et al. (2005) change a conventional dichotomous question into a trichotomous one. These studies are excluded.

  6. Articles are listed in “Appendix B”. In another situation, some studies (e.g., Johannesson et al. 1998) report a parametric estimate of WTP as well as a table summarizing elicitation outcomes with enough information to also generate a Turnbull estimate. In these cases, we use both estimates.

  7. The exclusion appendices are: C, studies that implemented a Certainty follow-up but report only calibrated economic values; D, studies that used MBDC/polychotomous-style elicitations; E, studies that reported values with and without certainty but provide too little information to generate economic values; F, articles excluded for other reasons; and G, articles that only cite or briefly mention the aforementioned seminal articles, review articles with no empirical results, or unrelated articles.

  8. Interaction variables between a non-10-point scale indicator and the quantitative recoding thresholds were used to control for any differences due to conversion. Several of these interactions were significant and generally show that the effect of Certainty adjustment for a particular recode level decreases with increasing scale.

  9. “Appendix I” contains the table of conversions. Too few studies use the top level of a 5-point qualitative scale as a cutoff, so these are grouped into the variable Qual8.

  10. Such studies are listed in “Appendix J”. A discerning reader may note that some of these methods recalibrate economic values through recoding (e.g., the Statistical Calibration Function and CE methods). To be precise, we define recoding as recalibration that applies observed certainty thresholds directly. Methods that incorporate Certainty follow-up data but calibrate or modify responses through a statistical or mathematical mechanism are classified as model calibration, not recoding.

  11. The difference between Dichotomous Choice and Referendum lies in the payment context. In Dichotomous Choice, provision of the good or service does not depend on others’ votes. In a Referendum, an individual’s decision is part of a group vote: if the referendum passes, the respondent must follow the group’s collective choice even if they voted to the contrary.

  12. We considered controlling for auction mechanisms but found no such study that utilized Certainty adjustment.

  13. We also considered a geographic control (e.g., non-US studies) and whether the study is peer-reviewed, following Stanley et al. (2013). Neither variable was significant in any model specification, and thus both are omitted from further discussion.

  14. To calculate AF for WTA, we reverse the numerator and denominator of Eq. (1).
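Eq. (1) itself is not reproduced in this excerpt, so the sketch below is an assumption: it takes the WTP adjustment factor to be the ratio of the hypothetical to the real value, a common convention in the HB literature, and swaps numerator and denominator for WTA as this note describes. The function name and values are illustrative.

```python
# Illustrative only: assumes Eq. (1) defines AF for WTP as
# hypothetical value / real value; the WTA version reverses the ratio.
def adjustment_factor(hypothetical, real, measure="WTP"):
    if measure == "WTP":
        return hypothetical / real
    if measure == "WTA":
        return real / hypothetical
    raise ValueError("measure must be 'WTP' or 'WTA'")

af_wtp = adjustment_factor(15.0, 10.0, "WTP")  # 1.5: hypothetical exceeds real
af_wta = adjustment_factor(15.0, 10.0, "WTA")  # reversed ratio for WTA
```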

  15. In some cases, the sample size changed between the calibrated and uncalibrated WTP, presumably because some respondents did not answer the Certainty follow-up question. To be conservative, we assume the lower sample size associated with calibrated WTP.

  16. This excludes eight observations determined to be outliers using several methods (Median Absolute Deviation, Cook's D, and excess standardized residuals), six of which had AF exceeding 20. Additionally, none of the explanatory variables were statistically significant in models that included these outliers. A similar process excluded two observations in the CF dataset. The model results including these outliers appear in Table A2.
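Of the three screening methods this note names, the Median Absolute Deviation (MAD) rule is simple enough to sketch. The 1.4826 scaling and the cutoff of 3 below are conventional choices, not values taken from the paper, and the data are fabricated for illustration.

```python
# Minimal MAD-based outlier screen: flag values whose scaled absolute
# deviation from the median exceeds the cutoff.
import statistics

def mad_outliers(values, cutoff=3.0):
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    scaled = 1.4826 * mad  # makes MAD consistent with the SD under normality
    return [abs(v - med) / scaled > cutoff for v in values]

afs = [1.1, 0.9, 1.3, 1.0, 25.0]  # an AF above 20, as in the excluded cases
flags = mad_outliers(afs)         # only the extreme AF is flagged
```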

  17. Supplemental results are provided in the “Appendix (Table A1)” using the 95% and 90% samples as well, each consistently demonstrating a significant effect size.

  18. In our analysis of ex-ante Cheap Talk scripts, we further explored two separate models: (1) distinguishing scripts that primarily give budget/substitute reminders from true Cheap Talk scripts, which explicitly warn about hypothetical bias and yea-saying behavior; and (2) distinguishing short scripts (22–64 words) from long scripts (112–390 words). These analyses show, respectively, that neither budget/substitute reminders nor short scripts affect the certainty AF.

  19. Beyond differentiation due to potential free-riding, a secondary reason stems from familiarity with the good, which can dictate the extent of HB (Schläpfer and Fischhoff 2012). We therefore explored an auxiliary formulation with dummy variables for environmental, health, and other types of goods relative to food-related goods. This specification also showed no significant differences in any of the four models.

  20. Another possible tactic, deemed inappropriate for the current analysis, is reducing the dataset to an equal number of observations (often one) per study. Doing so would eliminate a major contribution of this study: the ability to compare the efficacy of Certainty adjustment (e.g., Quant8 versus Quant7) within a study.

  21. The calculation uses the square root of the sample size for the FAT-PET estimator and the sample size for the PEESE estimator.
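The sample-size-based variants of these publication-bias estimators can be sketched as follows. This is a rough reconstruction under stated assumptions: with study standard errors unavailable, 1/sqrt(n) proxies the standard error in the FAT-PET regression and 1/n proxies its square in PEESE, following the Stanley and Doucouliagos approach; the data and function names are illustrative only.

```python
# Sample-size proxies for FAT-PET/PEESE meta-regression (illustrative data).
import numpy as np

effects = np.array([1.8, 1.5, 1.2, 1.1, 1.05])  # e.g., adjustment factors
n = np.array([50.0, 100.0, 400.0, 900.0, 2500.0])  # study sample sizes

def fat_pet(effects, n):
    # effect_i = b0 + b1 * (1/sqrt(n_i)); b0 is the precision-adjusted
    # effect (PET), b1 tests funnel asymmetry (FAT).
    X = np.column_stack([np.ones_like(effects), 1.0 / np.sqrt(n)])
    beta, *_ = np.linalg.lstsq(X, effects, rcond=None)
    return beta

def peese(effects, n):
    # effect_i = b0 + b1 * (1/n_i), the quadratic (variance) analogue.
    X = np.column_stack([np.ones_like(effects), 1.0 / n])
    beta, *_ = np.linalg.lstsq(X, effects, rcond=None)
    return beta

b_fatpet = fat_pet(effects, n)
b_peese = peese(effects, n)
```

In this fabricated example the larger studies sit closer to 1, so both estimators pull the precision-adjusted effect toward the big-sample values.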

References

  • Akter S, Brouwer R, Brander L, van Beukering P (2009) Respondent uncertainty in a contingent market for carbon offsets. Ecol Econ 68:1858–1863

  • Ariely D, Loewenstein G, Prelec D (2003) “Coherent arbitrariness”: stable demand curves without stable preferences. Q J Econ 118:73–106

  • Arrow K, Solow R, Portney P, Leamer E, Radner R, Schuman H (1993) Report of the NOAA panel on contingent valuation. Federal Register, Washington DC

  • Beck MJ, Fifer S, Rose JM (2016) Can you ever be certain? Reducing hypothetical bias in stated choice experiments via respondent reported choice certainty. Transport Res B-Meth 89:149–167

  • Bemmaor AC (1995) Predicting behavior from intention-to-buy measures: the parametric case. J Mark Res 32:176–191

  • Blomquist GC, Blumenschein K, Johannesson M (2009) Eliciting willingness to pay without bias using follow-up certainty statements: comparisons between probably/definitely and a 10-point certainty scale. Environ Resour Econ 43:473–502

  • Blomquist GC, Dickie M, O’Conor RM (2011) Willingness to pay for improving fatality risks and asthma symptoms: values for children and adults of all ages. Resour Energy Econ 33:410–425

  • Blumenschein K, Johannesson M, Blomquist GC, Liljas B, O’Conor RM (1998) Experimental results on expressed certainty and hypothetical bias in contingent valuation. South Econ J 65:169–177

  • Broadbent CD (2014) Evaluating mitigation and calibration techniques for hypothetical bias in choice experiments. J Environ Plan Manag 57:1831–1848

  • Brouwer R, Dekker T, Rolfe J, Windle J (2010) Choice certainty and consistency in repeated choice experiments. Environ Resour Econ 46:93–109

  • Champ PA, Alberini A, Correas I (2005) Using contingent valuation to value a noxious weeds control program: the effects of including an unsure response category. Ecol Econ 55:47–60

  • Champ PA, Bishop RC (2001) Donation payment mechanisms and contingent valuation: an empirical study of hypothetical bias. Environ Resour Econ 19:383–402

  • Champ PA, Bishop RC, Brown TC, McCollum DW (1997) Using donation mechanisms to value nonuse benefits from public goods. J Environ Econ Manag 33:151–162

  • Champ PA, Moore R, Bishop RC (2009) A comparison of approaches to mitigate hypothetical bias. Agric Resour Econ Rev 38:166–180

  • Ekstrand ER, Loomis J (1998) Incorporating respondent uncertainty when estimating willingness to pay for protecting critical habitat for threatened and endangered fish. Water Resour Res 34:3149–3155

  • Haab TC, McConnell KE (2002) Valuing environmental and natural resources: the econometrics of non-market valuation. Edward Elgar Publishing, London

  • Hanley N, Kriström B, Shogren JF (2009) Coherent arbitrariness: on value uncertainty for environmental goods. Land Econ 85:41–50

  • Huth WL, Morgan OA (2011) Measuring the willingness to pay for cave diving. Mar Resour Econ 26:151–166

  • Johannesson M (1993) Willingness to pay for antihypertensive therapy-further results. J Health Econ 12:95–108

  • Johannesson M, Blomquist GC, Blumenschein K, Johansson P-O, Liljas B, O’Conor RM (1999) Calibrating hypothetical willingness to pay responses. J Risk Uncertain 18:21–32

  • Johnston RJ, Ranson MH, Besedin EY, Helm EC (2006) What determines willingness to pay per fish? A meta-analysis of recreational fishing values. Mar Resour Econ 21:1–32

  • Li CZ, Mattsson L (1995) Discrete choice under preference uncertainty: an improved structural model for contingent valuation. J Environ Econ Manag 28:256–269

  • List JA, Gallet CA (2001) What experimental protocol influence disparities between actual and hypothetical stated values? Environ Resour Econ 20:241–254

  • Loomis JB (2012) Comparing households’ total economic values and recreation value of instream flow in an urban river. J Environ Econ Policy 1:5–17

  • Lundhede TH, Olsen SB, Jacobsen JB, Thorsen BJ (2009) Handling respondent uncertainty in choice experiments: evaluating recoding approaches against explicit modelling of uncertainty. J Choice Model 2:118–147

  • Mahieu PA, Riera P, Kriström B, Brännlund R, Giergiczny M (2014) Exploring the determinants of uncertainty in contingent valuation surveys. J Environ Econ Policy 3:186–200

  • Makriyannis C, Johnston RJ, Whelchel AW (2018) Are choice experiment treatments of outcome uncertainty sufficient? An application to climate risk reductions. Agric Resour Econ Rev 47:1–33

  • Manski C (1995) Identification problems in the social sciences. Harvard University Press, Cambridge, MA

  • Martínez-Espiñeira R, Lyssenko N (2012) Alternative approaches to dealing with respondent uncertainty in contingent valuation: a comparative analysis. J Environ Manage 93:130–139

  • Mattmann M, Logar I, Brouwer R (2019) Choice certainty, consistency, and monotonicity in discrete choice experiments. J Environ Econ Policy 8:109–127

  • Mitani Y, Flores NE (2014) Hypothetical bias reconsidered: payment and provision uncertainties in a threshold provision mechanism. Environ Resour Econ 59:433–454

  • Morrison M, Brown TC (2009) Testing the effectiveness of certainty scales, cheap talk, and dissonance-minimization in reducing hypothetical bias in contingent valuation studies. Environ Resour Econ 44:307–326

  • Murphy JJ, Allen PG, Stevens TH, Weatherhead D (2005) A meta-analysis of hypothetical bias in stated preference valuation. Environ Resour Econ 30:313–325

  • National Oceanic and Atmospheric Administration (1994) Natural resource damage assessment: proposed rules

  • Penn JM, Hu W (2018) Understanding hypothetical bias: an enhanced meta-analysis. Am J Agr Econ 100:1186–1206

  • Penn JM, Hu W (2019) Cheap talk efficacy under potential and actual hypothetical bias: a meta-analysis. J Environ Econ Manag 96:22–35

  • Penn JM, Hu W (2021) The extent of hypothetical bias in willingness to accept. Am J Agr Econ 103:126–141

  • Poe GL, Clark JE, Rondeau D, Schulze WD (2002) Provision point mechanisms and field validity tests of contingent valuation. Environ Resour Econ 23:105–131

  • Ready RC, Champ PA, Lawton JL (2010) Using respondent uncertainty to mitigate hypothetical bias in a stated choice experiment. Land Econ 86:363–381

  • Ready RC, Navrud S, Dubourg WR (2001) How do respondents with uncertain willingness to pay answer contingent valuation questions? Land Econ 77:315–326

  • Rhodes RJ, Whitehead JC, Smith TIJ, Denson MR (2018) A benefit-cost analysis of a red drum stock enhancement program in South Carolina. J Benefit Cost Anal 9:323–341

  • Roe B, Boyle KJ, Teisl MF (1996) Using conjoint analysis to derive estimates of compensating variation. J Environ Econ Manag 31:145–159

  • Schmidt J, Bijmolt THA (2020) Accurately measuring willingness to pay for consumer goods: a meta-analysis of the hypothetical bias. J Acad Mark Sci 48:499–518

  • Shaikh SL, Sun L, van Kooten GC (2007) Treating respondent uncertainty in contingent valuation: a comparison of empirical treatments. Ecol Econ 62:115–125

  • Stanley TD, Doucouliagos H (2012) Meta-regression analysis in economics and business. Routledge, London

  • Stanley TD, Doucouliagos H (2014) Meta-regression approximations to reduce publication selection bias. Res Synth Methods 5:60–78

  • Tuncel T, Hammitt JK (2014) A new meta-analysis on the WTP/WTA disparity. J Environ Econ Manag 68:175–187

  • Vossler CA, Ethier RG, Poe GL, Welsh MP (2003) Payment certainty in discrete choice contingent valuation responses: results from a field validity test. South Econ J 69:886–902

  • Welsh MP, Poe GL (1998) Elicitation effects in contingent valuation: comparisons to a multiple bounded discrete choice approach. J Environ Econ Manag 36:170–185

  • Whitehead JC, Cherry TL (2007) Mitigating the hypothetical bias of willingness to pay: a comparison of ex-ante and ex-post approaches. Resour Energy Econ 29:247–261


Acknowledgements

The authors are grateful to Glenn Blomquist for inspiring this work and thank Chadsity Robbins and Macy Hagan for their assistance in data collection. Parts of the work in this manuscript were completed while both authors were at the University of Kentucky.

Funding

This research was partially funded by the University of Kentucky and associated with Hatch Project LAB94426.

Author information


Correspondence to Wuyang Hu.

Ethics declarations

Conflict of interest

The authors declare no known conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (DOCX 396 kb)

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Penn, J., Hu, W. Adjusting and Calibrating Elicited Values Based on Follow-up Certainty Questions: A Meta-analysis. Environ Resource Econ 84, 919–946 (2023). https://doi.org/10.1007/s10640-022-00742-6

