
Environmental and Resource Economics, Volume 53, Issue 3, pp 389–407

Test–Retest Reliability of Choice Experiments in Environmental Valuation

  • Ulf Liebe
  • Jürgen Meyerhoff
  • Volkmar Hartje

Abstract

The paper presents the results of the first test–retest study on choice experiments in environmental valuation. In a survey concerning landscape externalities of onshore wind power in central Germany, respondents answered the same five choice sets at two different points in time. Each choice set comprised three alternatives described by five attributes, and the time interval between the test and the retest was eleven months. The analysis takes place at three levels, investigating choice consistency at the level of the individual choice task and repeatability of the latent construct utility at the level of parametric models as well as at the level of willingness-to-pay estimates. At the choice task level we observed 59 % identical choices. The parametric analysis shows that the test and retest estimates are not equal, even when we control for scale, that is, for differences in the error variance. However, comparing the marginal willingness-to-pay estimates between test and retest reveals a statistically significant difference for only one of the attributes. Overall, this indicates moderate test–retest reliability, taking into account that consistency at the choice task level overlooks the stochastic nature of the process underlying discrete choice experiments.
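As a purely illustrative sketch (not the authors' code), the Python fragment below shows how the two quantities summarised above might be computed: the share of identical choices across repeated choice tasks, and a complete combinatorial comparison of simulated marginal willingness-to-pay distributions in the spirit of Poe et al. (2005), with the WTP draws standing in for the output of a Krinsky and Robb (1986) style parametric bootstrap of each wave's model. All sample sizes, agreement rates, and WTP values are synthetic assumptions for the example, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(42)

# --- Choice task level: share of identical choices ------------------------
# Hypothetical data: 300 respondents, 5 choice sets, 3 alternatives (0, 1, 2).
# 'test' and 'retest' hold the alternative chosen in each wave.
n_resp, n_sets = 300, 5
test = rng.integers(0, 3, size=(n_resp, n_sets))
# Synthetic retest: agree with the test choice with ~60 % probability.
retest = np.where(rng.random((n_resp, n_sets)) < 0.6,
                  test,
                  rng.integers(0, 3, size=(n_resp, n_sets)))

share_identical = np.mean(test == retest)
print(f"Share of identical choices: {share_identical:.2%}")

# --- WTP level: complete combinatorial comparison (Poe et al. style) ------
# Hypothetical simulated marginal WTP draws for one attribute in each wave.
wtp_test = rng.normal(loc=12.0, scale=3.0, size=1000)
wtp_retest = rng.normal(loc=11.0, scale=3.5, size=1000)

# Proportion of all pairwise differences that are non-positive,
# converted to a two-sided p-value for the null of equal marginal WTP.
diff = wtp_test[:, None] - wtp_retest[None, :]
p_one_sided = np.mean(diff <= 0)
p_two_sided = 2 * min(p_one_sided, 1 - p_one_sided)
print(f"Two-sided p-value for equal marginal WTP: {p_two_sided:.3f}")
```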

Keywords

Choice experiment · Environmental valuation · Test–retest reliability · Wind power

JEL Classification

C8 Q0 Q5 

Notes

Acknowledgments

We are especially grateful to a reviewer who drew our attention to crucial issues regarding the definition and measurement of test–retest reliability of choice experiments. Also, we would like to acknowledge the comments made by Riccardo Scarpa (Associate Editor) and Wojtek Przepiorka. Finally, we would like to thank Christian Vossler for valuable suggestions made as a discussant of a previous version of this paper at the 4th World Congress of Environmental and Resource Economics 2010 in Montreal, Canada. Funding for this research, which was part of the project ‘Strategies for sustainable land use in the context of wind power generation’ (Fkz. 01UN0601A, B), was provided by the Federal Ministry of Education and Research in Germany.

Open Access

This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.

Copyright information

© The Author(s) 2012

Authors and Affiliations

  1. Department of Agricultural Economics and Rural Development, Georg-August-Universität Göttingen, Göttingen, Germany
  2. Department of Rural Sociology, Universität Kassel, Witzenhausen, Germany
  3. Institute for Landscape and Environmental Planning, Technische Universität Berlin, Berlin, Germany
