Analysis of Choice Questions

Valuing Oil Spill Prevention

Abstract

In this chapter the choices made by respondents in the main survey are used to construct a lower-bound estimate of the ex ante total value for preventing the expected harm from oil spills along the California Central Coast over the next decade. The relationships between the choice measure and other respondent characteristics measured by the survey are also examined. Section 6.2 presents two versions of the choice measure. Section 6.3 discusses the non-parametric (Turnbull, 1976) statistical framework used in much of our analysis of the estimate of ex ante total value. Section 6.4 provides the Turnbull lower-bound estimate of the sample mean [1] and examines the sensitivity of this estimate to various assumptions regarding the treatment of the data. Using the categories suggested by the NOAA Panel [2] as a framework, Section 6.5 examines the bivariate relationships between choice measures and respondent characteristics. Section 6.6 examines construct validity using a multivariate counterpart to the evaluations of individual variables reported in the prior section. Section 6.7 provides a sensitivity analysis that looks at possible shifts in value related to respondent assumptions at variance with key scenario features. Finally, Section 6.8 presents the most conservative treatment of the respondents who said that they did not pay California income taxes and its impact on the total value estimate.

References

  1. See Appendix F for a more detailed discussion of the Turnbull estimator and Appendix I for a comparative analysis of COS and the Exxon Valdez survey results.

  2. In a $65 version of the questionnaire, the interviewer did not circle an answer category at B-1. Given the nature of the comments recorded verbatim by the interviewer at B-1 (“I’m not going to answer that… can’t say for or against”), the B-1 response for this case was coded as not sure. All choice measure variables are denoted in bold capital letters.

  3. The null hypothesis tested in a chi-squared (χ²) test is that the rows and columns in a two-way table are independent.
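The mechanics of such a test can be sketched in a few lines; the two-way table of counts below is hypothetical, not taken from the survey:

```python
# Pearson chi-squared statistic for a two-way table of counts.
# H0: rows (e.g., vote for/against) and columns (e.g., a binary
# respondent characteristic) are independent.

def chi_squared_statistic(table):
    """Return the chi-squared statistic for a two-way table (list of rows)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n  # under H0
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical 2x2 table: rows are for/against, columns a characteristic.
table = [[30, 20],
         [10, 40]]
print(round(chi_squared_statistic(table), 3))  # → 16.667
```

A large statistic relative to the chi-squared distribution with (rows − 1) × (columns − 1) degrees of freedom leads to rejecting independence.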

  4. The seminal paper on the use of binary discrete choice data in contingent valuation is Bishop and Heberlein (1979). Hanemann (1984) developed the utility-theoretic approach to such models. Cameron and James (1986) look at choice data using an approach based on the willingness-to-pay function. McConnell (1990) compares the two approaches.

  5. We assume that no respondent would demand compensation for implementing a program to prevent oil spills along the Central Coast, i.e., no respondent has a negative WTP.

  6. This is in keeping with the NOAA Panel’s recommendation: “Generally, when aspects of the survey design and the analysis of the responses are ambiguous, the option that tends to underestimate willingness to pay is preferred” (Arrow et al., 1993, p. 4612).

  7. Respondents who voted against the program at B-1 were also given an opportunity to reconsider their vote at D-15.

  8. The initial uses of this framework in the CV literature are found in Carson and Steinberg (1990) and Kristrom (1990). Haab and McConnell (1997; 2002) provide further development. See Carson, Willis, and Imber (1994) for a large-scale application. Appendix F provides a detailed discussion of the Turnbull estimator.

  9. For example, if 20% of the sample is estimated to be in the interval $25 to $65, the lower-bound mean is calculated by assuming that this 20% of the sample is willing to pay exactly $25.
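As a sketch of this calculation (the interval shares below are illustrative, not the survey's estimates), the lower-bound mean is simply the density-weighted sum of the intervals' lower end-points:

```python
# Turnbull lower-bound estimate of mean WTP (illustrative numbers).
# Each interval's probability mass is assigned to the interval's
# lower end-point, which yields a lower bound on the sample mean.

# Hypothetical (lower_endpoint, share_of_sample) pairs; shares sum to 1.
densities = [
    (0.0, 0.30),   # WTP below $5, assigned $0
    (5.0, 0.12),   # $5-$25 interval, assigned $5
    (25.0, 0.20),  # $25-$65 interval, assigned $25
    (65.0, 0.18),
    (120.0, 0.12),
    (220.0, 0.08),
]
lower_bound_mean = sum(endpoint * share for endpoint, share in densities)
print(round(lower_bound_mean, 2))  # → 49.3
```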

  10. For example, if 20% of the sample is estimated to be in the interval $25 to $65, the upper-bound mean is calculated by assuming that this 20% of the sample is willing to pay exactly $65. As the high end-point of the last interval, $220–∞, is infinity, the upper-bound mean is infinite unless reasonable additional assumptions are imposed. As we are asking about WTP, it would be possible to substitute for infinity an upper bound based on either income or wealth. See Section 2 in Appendix F.
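The upper-bound counterpart replaces each interval's lower end-point with its upper end-point and requires a finite cap on the open-ended last interval; the shares and the income-based cap below are purely illustrative:

```python
# Upper-bound counterpart to the Turnbull lower bound: assign each
# interval's mass to its UPPER end-point. The open-ended last interval
# ($220 and up) needs a finite cap; a hypothetical income-based cap
# is used here, as the note suggests.
densities = [
    (5.0, 0.30),    # $0-$5 interval, assigned $5
    (25.0, 0.12),
    (65.0, 0.20),
    (120.0, 0.18),
    (220.0, 0.12),
]
cap = 40000.0             # hypothetical income-based cap for $220-infinity
open_interval_share = 0.08
upper_bound_mean = (sum(endpoint * share for endpoint, share in densities)
                    + cap * open_interval_share)
print(round(upper_bound_mean, 2))  # → 3265.5
```

The result is dominated by the cap chosen for the last interval, which is why the lower-bound estimate is the conservative choice used in the chapter.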

  11. In large but finite random samples such as the one used for this study, the number of respondents receiving each tax amount is approximately equal. The standard error of the estimate reflects the sampling variation.

  12. The values shown in the change in density column are the percentage of respondents who fall into each interval. For example, 12.0 percent of respondents fall into the $5–$25 interval; and, hence, the Turnbull assumes 12.0 percent are willing to pay $5. The z-statistics for the five change in density parameters estimated by the model are 9.93, 2.61, 1.80, 1.69, and 2.41, respectively. The significance of each individual parameter value is of little importance; the set of parameters taken together, however, is reflected in the standard error of the estimate. The standard error of $3.90 suggests reasonable precision in the estimate.

  13. The Turnbull estimate of the lower bound on the sample mean using the sample weights (see Appendix B.10) is $85.50, only $0.11 higher than the unweighted sample estimate. The standard error of the weighted estimate is $3.84.

  14. True zeros are correctly taken into account in the calculation of the Turnbull lower bound on the sample mean, while nay-sayers who do have a positive WTP for the good under the scenario depicted will bias the Turnbull lower-bound on the sample mean downward. The presence of yea-sayers will bias the Turnbull lower bound on the sample mean upward.

  15. The log-likelihoods for the log-normal model and the log-normal model with a spike at zero are −711.31 and −709.81, respectively. For the Weibull and Weibull spike models, the log-likelihoods are −710.54 and −709.67, respectively.

  16. The log-likelihoods for the log-normal with a spike at zero, the Weibull spike model, the Box-Cox model, and the Turnbull are −709.81, −709.67, −709.63, and −709.48, respectively.

  17. The parameter estimates and z-statistics for the variables included in the construct validity model below (see Table 6.7) provide additional information about the extent to which each variable, controlling for the other variables in the equation, influences the percentage who voted for the program and in what direction this influence is exerted.

  18. There is a long history of estimating construct validity equations in CV studies; see, e.g., Knetsch and Davis (1966). For an example involving oil spill prevention, see Carson et al. (1992).

  19. With B1CH as the dependent indicator variable, the simple probit model yields a constant term of 0.3483 (0.0578) and a slope coefficient on B1AMT of −0.0044 (0.0005), where the standard errors are in parentheses.
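With these reported coefficients, the probit's implied probability of a vote for the program at a given tax amount is Φ(0.3483 − 0.0044 · B1AMT). A small sketch (the standard normal CDF is built from math.erf; the tax amounts shown are illustrative):

```python
# Predicted probability of a 'for' vote from the simple probit whose
# coefficients are reported in the note: Phi(0.3483 - 0.0044 * B1AMT).
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def prob_for(b1amt, const=0.3483, slope=-0.0044):
    """Probability of voting for the program at tax amount b1amt."""
    return norm_cdf(const + slope * b1amt)

# Illustrative tax amounts; the predicted probability falls as the
# assigned tax amount rises.
for amount in (5, 25, 65, 120, 220):
    print(amount, round(prob_for(amount), 3))
```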

  20. A Weibull choice model rather than the Box-Cox probit model used in Table 6.7 yields the same basic results; see Appendix H, Table 1.

  21. This high correlation between the Box-Cox parameter λ and the linear coefficient on the variable being scaled (B1AMT) has long been noted in the biometrics literature on fitting dose-response models (Morgan, 1992).

  22. Assuming λ was known a priori to be 0.3424, its estimated value in Table 6.7, the t-statistic on the transformed B1AMT would be −10.40 (p < 0.001).

  23. Note that Table 6.7 reports p-values for two-sided hypothesis tests. In most instances, the hypothesis about the coefficient on a particular test is of the one-sided form (e.g., a null hypothesis that respondents who do not think the program works are as likely to vote for the program as other respondents versus the alternative that they are less likely). For one-sided hypothesis tests, the reported (two-sided) p-values should be divided by 2.
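Concretely, the conversion described here is just a halving, valid when the estimated coefficient's sign matches the direction of the one-sided alternative:

```python
# Converting a two-sided p-value to its one-sided counterpart, as
# described in the note. Valid when the estimated sign agrees with
# the one-sided alternative hypothesis.
def one_sided_p(two_sided_p):
    return two_sided_p / 2.0

print(one_sided_p(0.04))  # → 0.02
```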

  24. Missing values for income (n = 86) have been replaced with an estimate based on the median income in the 1993 zip code, housing type, education, gender, race, age, and qualitative variables for the number of employed adults in the household. Tables 2 and 3 in Appendix H present more detailed definitions of the variables included in the income prediction equation and the model for estimating income, respectively. Excluding from the sample the households who did not report income does not change the sign or significance of the income measure or the role of any other variables; see Table 4 in Appendix H. It does reduce the sample from 1085 to 999, so the precision of some of the tests for relationships between these variables and respondents’ choices necessarily decreases somewhat.

  25. If one allows the income coefficient to vary with the level of household income, the effect of log income on the probability of favoring the program is still positive and significant with a p-value of 0.006. This result holds regardless of the treatment of missing values for income. The one-variable income specification can be rejected in favor of the two-variable specification used here using a likelihood-ratio test (χ²(1) = 2.86, p = 0.09). If LINC1 in turn is split into two categories, one consisting of those households with income greater than the median California household income and one consisting of those below, the estimated income effect in the second category (LINC2) is still smaller but not significantly so.

  26. Note that the other program asked about in the A-2 series, spending on prisons, was not included in LOWSPEND, as its inclusion resulted in perfect prediction (i.e., none of the respondents who meet this more inclusive criterion voted for the program).

  27. The absolute value of the coefficient on LESSHARM is almost twice that of MOREHARM. However, the percentage of respondents giving a MOREHARM answer is more than double that of those giving a LESSHARM answer.

  28. Normalization of a variable is accomplished by subtracting the variable’s mean value from each observation in the data set and then dividing by the variable’s standard deviation. The normalized variable has a mean value of zero and a standard deviation of one. The value of a normalized variable is interpreted in terms of the number of standard deviations from the mean.
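The normalization described here can be sketched directly (the data values below are hypothetical):

```python
# Normalizing (z-scoring) a variable: subtract the mean from each
# observation, then divide by the standard deviation. The result has
# mean zero and standard deviation one.
import statistics

def normalize(values):
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)  # population SD; a sample SD also works
    return [(v - mean) / sd for v in values]

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]  # hypothetical values
z = normalize(data)
print(round(statistics.mean(z), 10), round(statistics.pstdev(z), 10))  # → 0.0 1.0
```

A normalized value of, say, 2 is then read as "two standard deviations above the mean."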

  29. LINC1 and LINC2 have been added together to form LINC, the log of the respondent’s household income. There is no gain, in the cluster approach, from using two income variables, as the clustering algorithm can perform its partitioning at any point along the income distribution.

  30. The choice of k in a cluster analysis is largely dependent upon the purpose for which the analysis is intended and the nature of the data being clustered. Allowing too few clusters can suppress key detail in the data. Allowing too many clusters makes interpretation difficult and eventually will largely reproduce the regression results already provided in Table 6.7. We have chosen k equal to four as a compromise. Much of the same insight is gained if k is equal to three or k is equal to five.

  31. Because the B1AMT amounts were assigned to respondents independently of their characteristics, and B1AMT was not used in determining the clusters, it is meaningful to look at the average probability of a vote for the program across the different clusters.

  32. The basic set of Box-Cox parameters are, as in Table 6.7, highly correlated with each other; this correlation is responsible for their fairly small overall z-statistics. In a probit model of the cluster indicators with either B1AMT or log(B1AMT) as the stimulus variable, the z-statistic on the stimulus variable is over 9 (p < 0.001). The linear form of the model in Table 6.10 can be rejected using a likelihood-ratio test at p = 0.02 and the log form of the model at p = 0.01.

  33. The median is the point in the WTP distribution above which 50 percent of the respondents are predicted to be willing to pay more and below which 50 percent are willing to pay less.

  34. In contrast, the estimate of mean WTP is quite sensitive to distributional assumptions and tends to be dominated by the assumption that is made regarding a very small percentage of observations in the right tail of the distribution. An additional issue with using the Box-Cox model in Table 6.7 (Collins, 1991) is that there are a number of technical difficulties associated with inverting that model to get mean estimates, particularly for individual observations with predicted estimates close to zero. If one sets to zero the predicted values that are close to zero, the changes in the predicted Box-Cox means are generally similar in magnitude to the changes in the predicted medians reported in this section. The Turnbull estimate of the lower bound on the sample mean used in earlier sections of this chapter avoids all of these difficulties. The Turnbull framework does, however, have the disadvantage (relative to the Box-Cox) that it is not computationally straightforward to generalize that framework to look at the implications of changes in particular covariate values while holding other covariate values constant. One can, of course, use the Turnbull framework to look at the differences in WTP based on any rule that divides observations into a small number of finite groups, as presented earlier.

  35. For the B1CHNT choice measure, the Turnbull estimate of the lower bound on the sample mean using the sample weights (Appendix B.10) is $77.36, $0.91 higher than the unweighted estimate. The standard error of the weighted estimate is $3.73.

  36. The z-statistics for the five change-in-density parameters estimated by the model are 11.56, 2.17, 1.42, 2.06, and 2.17, respectively.

Copyright information

© 2004 Springer Science+Business Media Dordrecht

About this chapter

Cite this chapter

Carson, R.T., Conaway, M.B., Hanemann, W.M., Krosnick, J.A., Mitchell, R.C., Presser, S. (2004). Analysis of Choice Questions. In: Valuing Oil Spill Prevention. The Economics of Non-Market Goods and Resources, vol 5. Springer, Dordrecht. https://doi.org/10.1007/978-1-4020-2864-9_6

  • DOI: https://doi.org/10.1007/978-1-4020-2864-9_6

  • Publisher Name: Springer, Dordrecht

  • Print ISBN: 978-94-017-3636-7

  • Online ISBN: 978-1-4020-2864-9
