Volume 11, Issue 6, pp 394-402
Date: 03 Aug 2005

Measuring Preferences on Environmental Damages in LCIA. Part 1: Cognitive Limits in Panel Surveys



Part 1: Cognitive Limits in Panel Surveys · Part 2: The Question Format in Panel Surveys. This series of two papers discusses the elicitation of weights for damage categories in LCIA with the aid of panel surveys, focusing on methodological aspects. Part 1 discusses potential cognitive limits of the panel members in understanding the reference that is weighted. Part 2 focuses on the influence of the question format and compares results from two different weighting tasks: discrete choice (between alternatives) and score allocation.

Goal, Scope and Background

The weighting of environmental impacts and damages to the safeguard subjects Human Health, Ecosystems, and Resources is a key step in fully aggregated LCIA. Panel surveys have become a common approach in LCIA research for investigating stakeholders' preferences on environmental impacts and damages. Despite numerous studies, knowledge on how to elicit reliable weights remains limited and inconsistent. We present a questionnaire study with 58 environmental science students to investigate so-called framing effects in panel surveys.

Main Features

The study investigates the significance of different framings, which were provided by three references. In addition, the significance of quantitative information provided in the questionnaire is tested. The references are (a1) safeguard subjects without specified additional information, (a2) damages in Europe as perceived by the panelist, and (a3) quantified scenarios derived from Eco-indicator 99. All participants ranked and rated the importance of the safeguard subjects three times, once within each reference system. Following a test-of-scope design, the quantitative information given to the panelists was varied between two groups: one group (b1) received data from the Eco-indicator 99 methodology, whereas the other (b2) received data with significantly higher Human Health damages and lower Ecosystem damages, ceteris paribus. This design allows testing the influence of quantitative data on the ratings.


Results
The weighting of the safeguard subjects (a1) reveals that Human Health is considered a slightly more important safeguard subject than Ecosystems; both, however, are judged significantly more important than Resources. This picture changes for references (a2) and (a3), where damages were weighted: for both references, the respondents rated damages to Ecosystems as most important, followed by Resources, with Human Health receiving by far the lowest weights. The framing of the reference being weighted therefore played a significant role. The ratings of subgroups (b1) and (b2) did not differ with respect to the importance of damages, even though substantially different quantitative information was given.

Conclusion and Outlook

The participants of the study were largely insensitive to the quantitative information provided. This raises three questions, which are discussed: What is the mental model upon which respondents base their beliefs and values? Can we expect that 'more sophisticated' subjects would respond differently? Which prerequisites should an empirical weighting procedure fulfill in order to incorporate numerical data? We propose several approaches for future procedures to analyze these questions rigorously.