Abstract
This chapter describes how to collect primary and secondary data for nonmarket valuation studies. The bulk of the chapter offers guidance on how to design and implement a high-quality nonmarket valuation survey. Understanding the full data collection process will also help in evaluating the quality of data that someone else collected (i.e., secondary data). As there are no standard operating procedures for collecting nonmarket valuation data, this chapter highlights the issues that should be considered at each step of the data collection process, from sampling to questionnaire design to questionnaire administration. While high-quality data that reflect individual preferences will not ensure the reliability or validity of estimated nonmarket values, quality data are a prerequisite for reliable and valid nonmarket measures.
Notes
1. Post hoc adjustments are weights applied to the data so that the sample's distribution on key variables (usually demographic variables) is similar to that of the population of interest. Groves et al. (2009) described approaches for weighting data. Post hoc adjustments are also made to data collected using probability sampling to correct for nonresponse error.
2. The study was a split-sample test of different decision rules in a contingent-valuation question. Because we did not need to draw inferences to a larger population about the magnitude of the estimated contingent values, a nonprobability sample was used.
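The post hoc adjustment described in Note 1 can be sketched in a few lines of code. The idea is to weight each demographic group by the ratio of its population share to its sample share, so the weighted sample reproduces the population distribution. The age groups and shares below are hypothetical placeholders; an actual study would use benchmarks from census or other population data.

```python
# Sketch of post hoc (post-stratification) weighting, as in Note 1.
# Age groups and shares are hypothetical, not from any actual study.

population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
sample_counts = {"18-34": 120, "35-54": 200, "55+": 280}

n = sum(sample_counts.values())  # total respondents

# Weight for each group = population share / sample share, so
# over-represented groups are down-weighted and under-represented
# groups are up-weighted.
weights = {g: population_share[g] / (sample_counts[g] / n)
           for g in sample_counts}

# After weighting, group shares match the population distribution.
weighted_shares = {g: weights[g] * sample_counts[g] / n
                   for g in sample_counts}
```

Here the youngest group is under-represented in the sample (20% versus 30% in the population), so it receives a weight greater than one; these weights would then be applied in estimating means or willingness-to-pay models. Nonresponse weighting for probability samples follows the same mechanics, with response propensities in place of raw sample shares.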
References
Baker, R., Blumberg, S. J., Brick, J. M., Couper, M. P., Courtright, M., Dennis, J. M., Dillman, D., Frankel, M. R., Garland, P., Groves, R. M., Kennedy, C., Krosnick, J., Lavrakas, P. J., Lee, S., Link, M., Piekarski, L., Rao, K., Thomas, R. K. & Zahs, D. (2010). AAPOR report on online panels. Public Opinion Quarterly, 74, 711-781.
Clark, J. & Friesen, L. (2008). The causes of order effects in contingent valuation surveys: An experimental investigation. Journal of Environmental Economics and Management, 56, 195-206.
Cukier, K. N. & Mayer-Schoenberger, V. (2013). The rise of big data: How it’s changing the way we think about the world. Foreign Affairs, 92, 28-40.
Dalecki, M. G., Whitehead, J. C. & Blomquist, G. C. (1993). Sample non-response bias and aggregate benefits in contingent valuation: An examination of early, late and non-respondents. Journal of Environmental Management, 38, 133-143.
Dillman, D. A., Smyth, J. D. & Christian, L. M. (2009). Internet, mail and mixed-mode surveys: The tailored design method (3rd ed.). New York: John Wiley & Sons.
Grandjean, B. D., Nelson, N. M. & Taylor, P. A. (2009). Comparing an Internet panel survey to mail and phone surveys on willingness to pay for environmental quality: A national mode test. Paper presented at the 64th annual conference of The American Association for Public Opinion Research, May 14-17. Hollywood, FL.
Greenbaum, T. L. (2000). Moderating focus groups: A practical guide for group facilitation. Thousand Oaks, CA: Sage Publications.
Groves, R. M. (2006). Nonresponse rates and nonresponse bias in household surveys. Public Opinion Quarterly, 70, 646-675.
Groves, R. M. & Peytcheva, E. (2008). The impact of nonresponse rates on nonresponse bias: A meta-analysis. Public Opinion Quarterly, 72, 167-189.
Groves, R. M., Fowler, F. J., Couper, M. P., Lepkowski, J. M., Singer, E. & Tourangeau, R. (2009). Survey methodology (2nd ed.). New York: John Wiley & Sons.
Lindhjem, H. & Navrud, S. (2011). Using internet in stated preference surveys: A review and comparison of survey modes. International Review of Environmental and Resource Economics, 5, 309-351.
Loomis, J. B. (2000). Vertically summing public good demand curves: An empirical comparison of economic versus political jurisdictions. Land Economics, 76, 312-321.
Loomis, J. B. & Rosenberger, R. S. (2006). Reducing barriers in future benefit transfers: Needed improvements in primary study design and reporting. Ecological Economics, 60, 343-350.
Marsden, P. V. & Wright, J. D. (Eds.). (2010). Handbook of survey research (2nd ed.). Bingley, United Kingdom: Emerald Group.
Mattsson, L. & Li, C. Z. (1994). Sample nonresponse in a mail contingent valuation survey: An empirical test of the effect on value inference. Journal of Leisure Research, 26, 182-188.
Medway, R. L. & Fulton, J. (2012). When more gets you less: A meta-analysis of the effect of concurrent web options on mail survey response rates. Public Opinion Quarterly, 76, 733-746.
Mitchell, R. C. & Carson, R. T. (1989). Using surveys to value public goods: The contingent valuation method. Washington, DC: Resources for the Future.
Olson, K., Smyth, J. D., & Wood, H. M. (2012). Does giving people their preferred survey mode actually increase survey participation rates? An experimental examination. Public Opinion Quarterly, 76, 611-635.
Powe, N. A. & Bateman, I. J. (2003). Ordering effects in nested ‘top-down’ and ‘bottom-up’ contingent valuation designs. Ecological Economics, 45, 255-270.
Schwarz, N. (1990). Assessing frequency reports of mundane behaviors: Contributions of cognitive psychology to questionnaire construction. In C. Hendrick & M. S. Clark (Eds.), Research methods in personality and social psychology (pp. 98-119). Beverly Hills, CA: Sage.
Singer, E. & Presser, S. (Eds.). (1989). Survey research methods: A reader. Chicago: University of Chicago Press.
Smith, V. K. (1993). Nonmarket valuation of environmental resources: An interpretive appraisal. Land Economics, 69, 1-26.
Smith, T. W. (Ed.). (2015). Standard definitions: Final dispositions of case codes and outcome rates for surveys (8th ed.). American Association for Public Opinion Research. Oakbrook Terrace, IL: AAPOR. www.aapor.org/AAPORKentico/AAPOR_Main/media/publications/Standard-Definitions2015_8theditionwithchanges_April2015_logo.pdf.
Sudman, S. & Bradburn, N. M. (1982). Asking questions. San Francisco: Jossey-Bass.
Sutherland, R. J. & Walsh, R. G. (1985). Effect of distance on the preservation value of water quality. Land Economics, 61, 281-291.
Taylor, P. A., Nelson, N. M., Grandjean, B. D., Anatchkova, B. & Aadland, D. (2009). Mode effects and other potential biases in panel-based internet surveys: Final report. Wyoming Survey & Analysis Center. WYSAC Technical Report No. SRC-905. Laramie: University of Wyoming. http://yosemite.epa.gov/ee/epa/eerm.nsf/Author/A62D95F235503D03852575A800674D75.
Tourangeau, R., Conrad, F. G. & Couper, M. P. (2013). The science of web surveys. New York: Oxford University Press.
U.S. Office of Management and Budget. (2006). Standards and guidelines for statistical surveys. Washington, DC: OMB.
Weisberg, H. F., Krosnick, J. A. & Bowen, B. D. (1996). An introduction to survey research, polling, and data analysis (3rd ed.). Thousand Oaks, CA: Sage Publications.
Whitehead, J. C. (1991). Environmental interest group behavior and self-selection bias in contingent valuation mail surveys. Growth and Change, 22, 10-21.
Whitehead, J. C., Groothuis, P. A. & Blomquist, G. C. (1993). Testing for non-response and sample selection bias in contingent valuation. Economics Letters, 41, 215-220.
Whitehead, J. C., Groothuis, P. A., Hoban, T. J. & Clifford, W. B. (1994). Sample bias in contingent valuation: A comparison of the correction methods. Leisure Sciences, 16, 249-258.
Copyright information
© 2017 Springer Science+Business Media B.V. (outside the USA)
About this chapter
Cite this chapter
Champ, P.A. (2017). Collecting Nonmarket Valuation Data. In: Champ, P., Boyle, K., Brown, T. (eds) A Primer on Nonmarket Valuation. The Economics of Non-Market Goods and Resources, vol 13. Springer, Dordrecht. https://doi.org/10.1007/978-94-007-7104-8_3
Publisher Name: Springer, Dordrecht
Print ISBN: 978-94-007-7103-1
Online ISBN: 978-94-007-7104-8