Challenges and risks when communicating comparative LCA results to management

Communicated by Matthias Finkbeiner.


Introduction
While life cycle assessment (LCA) has enjoyed decades of increasing popularity as the analytic tool of choice when evaluating the systemic environmental consequences of products, materials, and industrial actions, it has yet to resolve a fundamental tension between its application as a descriptive scientific instrument and its potential as a prescriptive aid for decision-making and management. Partly, this is because standardized LCA methods as they have been codified in popular software packages and ISO guidance documents generate results that may encourage misconception, indecision, inaction, and irrelevance in comparative decision problems. Thus, LCA has failed to resolve the seemingly interminable comparative environmental questions such as "Paper or plastic?" with anything more definitive than "On the other hand…." (e.g., Muthu et al. 2009). For example, LCA data are often presented as single point values without indication of uncertainty or data quality. In these cases, statistically insignificant differences between alternatives may appear as meaningful, resulting in decision-maker overconfidence. Furthermore, sensitivity analyses and exploration of "What if … ?" alternative scenarios are rare, obscuring opportunities for environmental improvement. Wherever communication of LCA results fails to speak to the specific motives of the study, risks of misinterpretation may result in perverse environmental outcomes.
When confronted with environmental tradeoffs, LCA practitioners must navigate between the descriptive scientific function of "gathering facts" and the prescriptive need of "applying values" to decisions (Hertwich et al. 2000). Early representations of LCA were predicated on the dominant environmental risk paradigm at that time, which postulated two phases: (1) a descriptive risk characterization, for generating objective data on hazards, exposure, dose, and response, and (2) a prescriptive risk management, for comparative assessment of alternatives (National Research Council 1983). While this disjointed process has since been revised in favor of a more integrated approach to risk analysis (National Research Council 2009), LCA as codified by ISO standards remains rooted in an illusion of separation of objective and subjective phases. For example, inventory and characterization phases are represented as objective and descriptive, while normalization and weighting (i.e., interpretation) are represented as introducing values necessary for prescriptive management of environmental alternatives. For fear of subjectivity, current practice often truncates LCA analysis at the stage of characterization, while development of methodological advances in interpretation practices has amounted to little more than refining weight factors.
Given that the LCA practitioner is rarely empowered with decision authority, LCA interpretation requires facility in communication of descriptive results in ways that make transparent to decision-makers the tradeoffs inherent in the decision. Sala et al. (2020) characterize the lack of guidance for interpretation as "alarming" and include data quality, spatial and temporal heterogeneity, normalization and weighting practices, treatment of uncertainty, and sensitivity to selection of functional units as critical issues. Especially with comparative LCAs, the tradeoffs inherent between alternatives rarely reveal a simple decision pathway (Freidberg 2015). In addition, presenting the uncertainty involved in LCA results adds even more difficulty. Uncertainty analyses need to be carefully considered, as making them too simple for the decision-maker may leave out important information, which may create biases in the interpretation (Kandlikar et al. 2005). Performing a more sophisticated uncertainty analysis can also be detrimental, as it may be too difficult for the user to interpret, which may also misguide them in their decision-making process. While qualitative explanations can support statistical results, in the field of LCA, there are no standards for sense-making when confronted with conflict or ambiguity. Focusing on single impact categories such as climate change (now known as carbon tunnel vision) simplifies interpretation and improves decision confidence, but only at the expense of neglecting the broad, systemic information that motivates LCA in the first place. Given these challenges, it is reasonable to conclude that there is a need for further attention to identifying and improving overall communication practices of comparative LCA results to inform holistic environmental management decisions.
This article identifies the risks of misinterpretation in communication of LCA results under various conditions of uncertainty and makes some recommendations to guard against them. Because data visualization can guide decision intuitions, this study uses the most popular graphical representation of LCA results: bar charts. We illustrate the conditions under which this typical visualization may misguide decision-makers and caution readers to examine discernibility as distinct from relative difference.

Discernibility vs relative differences
Typical comparative LCAs will exhibit quantities and their relative differences. Here, two quantities may exhibit a relative difference deemed large enough to signal an important distinction in environmental consequences (for some, this may be 20%; for others, 50%). However, without statistical context, seemingly large differences may prove to be insignificant. We use the term discernibility to describe the difference between two quantities in the context of statistical uncertainty, and suggest that discernibility is an improvement on relative difference for interpretation of comparative LCA results (Heijungs and Kleijn 2001; Mendoza Beltran et al. 2018). For example, when comparing the results of Monte Carlo exploration of uncertainty, two alternatives might be characterized as discernible when outcomes such as A > B persist in at least 95% of the trials, regardless of the percentage difference illustrated in a bar chart. Where uncertainty is low (e.g., climate change), a relative difference of 5% may be discernible, whereas a difference of 30% or more may not be discernible where uncertainty is high (e.g., water scarcity).
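A minimal sketch of this kind of discernibility check, assuming hypothetical lognormal uncertainty distributions (the parameters below are illustrative, not from any actual inventory), counts how often one alternative outperforms the other across paired Monte Carlo trials:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical Monte Carlo samples of a characterized impact score
# (e.g., kg CO2-eq per functional unit) for two alternatives; the
# lognormal parameters are illustrative assumptions, not study data.
impact_a = rng.lognormal(mean=np.log(100), sigma=0.10, size=10_000)
impact_b = rng.lognormal(mean=np.log(110), sigma=0.10, size=10_000)

def relative_difference(a, b):
    """Relative difference of the means, as a scaled bar chart shows it."""
    return abs(a.mean() - b.mean()) / max(a.mean(), b.mean())

def discernibility(a, b):
    """Fraction of paired Monte Carlo trials in which A outperforms
    (has lower impact than) B."""
    return float(np.mean(a < b))

p = discernibility(impact_a, impact_b)
print(f"relative difference of means: {relative_difference(impact_a, impact_b):.1%}")
print(f"A < B in {p:.1%} of trials; discernible at 95%: {p >= 0.95}")
```

In this synthetic example, a roughly 9% difference in means falls well short of the 95% persistence criterion, illustrating how a visible gap between bars can still be statistically indiscernible.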
The advantage of the discernibility approach is that it avoids amplifying relative differences that are statistically indistinguishable. The disadvantage is that it is more computationally intensive and more difficult for practitioners to communicate to decision-makers.

Four risks in comparative LCA communication
When considered together, relative difference and discernibility can lead to four different types of LCA communication risks, depending on the magnitude of each. Figure 1 maps the pitfalls of the communication landscape. The horizontal axis represents the magnitude of the relative difference, which can be thought of as the difference between scaled bar charts in typical LCA software results. Note that this study refers to small and large relative differences as they may be perceived by the decision-maker. As such, no objective threshold between small and large exists. The vertical axis represents statistical discernibility, as determined from uncertainty analysis (Heijungs and Frischknecht 2005), and is only revealed to LCA practitioners able to investigate uncertainty beyond the typical practices codified in commercial software packages. Because discernibility and relative difference are independent of one another, depending on the uncertainty context, large differences in mean performance (far right on the x-axis) may not correlate with discernible results (high end of the y-axis).
Each quadrant of Fig. 1 represents a communication risk:
Indecision (bottom left quadrant)
When the relative difference between two bar charts is small, decision-makers are unlikely to focus attention on a choice, especially when statistical analysis reveals that the two alternatives are indiscernible. In this case, the quick, easy-to-interpret relative difference approach is reliable to the extent that it correlates with the more intensive, challenging discernibility approach. The communication risk here is indecision. Without a salient environmental difference between alternatives, the practitioner may be at a loss to make a recommendation, given that other considerations outside the scope of LCA, such as costs, public perception, or feasibility, may govern the decision.

Misconception (bottom right quadrant)
A more serious situation occurs when the relative difference is large while discernibility is low. Although differences in mean performance might appear so large that decision-makers perceive an opportunity to reduce environmental impacts, where high levels of uncertainty cloud the analysis, there is a danger of misconception. This result can lead to ineffective or perverse outcomes based on overconfidence in misleading results. For organizations, this means interventions may have low efficacy in reducing impacts, and public messaging may rest on overstated claims. Adding discernibility analysis can remedy this misconception risk.
Irrelevance (top left quadrant)
When mean differences are small, but statistical analysis reveals discernible results, only the most sophisticated decision-makers may appreciate the potential of the choice at hand. Discernibility does not capture the value decision-makers may place on a statistically significant difference (Mendoza Beltran et al. 2018). That is, discernibility can identify those quantities that represent reliable tradeoffs, but not whether those tradeoffs are relevant to decision-maker objectives. Therefore, when faced with negligible differences that are discernible, decision-makers may either consider options to be equivalent or overestimate the importance of a comparative claim just because it is discernible. Additional context, such as a planetary boundaries approach, may help resolve this issue by translating potential improvements into extrinsic measures of impact at global or regional scales (Bjørn and Hauschild 2015). However, scaling planetary thresholds to the scale of any particular functional unit remains a challenge. One strategy that may help in these situations is expressing impacts in terms of analogous functional units, or yardsticks. For example, translating carbon equivalencies into a functional unit familiar to decision-makers, such as "that's the same impact as X many vehicle-miles traveled," may help render an unfamiliar measure in more relatable units. The EPA, for instance, provides common equivalency factors for the climate change impact category (EPA 2020). One downside of this practice is that these analogous functional units (miles driven, plastic bottles, trees removed) may elicit an emotional response that can affect magnitude perception. While an emotional response may make insights more engaging, it may also distort the interpretation of results.
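A sketch of the yardstick idea follows. The conversion factor is an assumption based on EPA's widely cited equivalency figure of roughly 404 g CO2 per average passenger-vehicle mile; it should be checked against the current EPA equivalencies calculator before use:

```python
# Express a carbon footprint in a "yardstick" unit familiar to
# decision-makers. The factor is an illustrative assumption based on
# EPA's greenhouse gas equivalencies (~0.404 kg CO2 per vehicle-mile
# for an average passenger car); verify against current EPA figures.
KG_CO2_PER_VEHICLE_MILE = 0.404

def as_vehicle_miles(kg_co2e: float) -> float:
    """Translate kg CO2-eq into equivalent passenger-vehicle miles."""
    return kg_co2e / KG_CO2_PER_VEHICLE_MILE

# e.g., a 500 kg CO2-eq difference between two alternatives
print(f"~{as_vehicle_miles(500):,.0f} vehicle-miles traveled")
```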
Another strategy when discernible differences are small at the scale of a single functional unit is to evaluate the magnitude of difference at a larger scale. For example, Prado et al. (2021a, b) compare the carbon footprint of different vegetable frying oils. While differences appear small at the functional unit of 1 kg of oil, scaling the discernible differences up to the whole operation of a major fast food chain makes the environmental opportunities more salient.

Inaction (top right quadrant)
When both relative differences and discernibility are high, decision confidence should also be high. In this case, the intuitive interpretation of big differences on the bar chart coincides with the discernibility analysis, and there are indeed promising avenues for impact reduction. Still, obstacles to the decision may remain because there may still be tradeoffs that managers are left to confront unaided (Cinelli et al. 2014; Cucurachi et al. 2017; Prado et al. 2012). Without clarity of action, decision-makers risk lapsing into inaction. To circumvent this problem, several mechanisms exist that attempt to make the information more manageable, such as external normalization references (Van Hoof et al. 2013) and single scores derived from a weighted sum (Laurent et al. 2011; Lautier et al. 2010). Each of these carries its own set of risks (Pollesch and Dale 2016; Prado-Lopez et al. 2014; Prado et al. 2019; Rogers and Seager 2009). In practice, when faced with tradeoffs, comparative LCAs are dominated by carbon impacts, thus falling into the carbon tunnel vision that is prevalent in environmental management today. For example, the US renewable fuel standard is based entirely on life cycle climate change performance. The policy leaves out other key criteria, such as eutrophication, and its neglect of them has contributed to increases in other impacts, exemplified by the dead zone of the Gulf of Mexico. These unintended outcomes can conflict with other policy targets (Costello et al. 2009; Miller et al. 2007). Another example may be found in reconciling climate change and water scarcity related to fracking for fossil fuel extraction. Where decision-makers lack the ability to balance different environmental criteria, conflicting policies may impede progress.
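The four quadrants of Fig. 1 can be sketched as a simple classifier. The thresholds below are illustrative placeholders only, since, as noted above, no objective boundary between small and large relative differences exists:

```python
def communication_risk(relative_diff: float, discernibility: float,
                       diff_threshold: float = 0.20,
                       discern_threshold: float = 0.95) -> str:
    """Map a comparison onto the four quadrants of Fig. 1.

    Thresholds are illustrative assumptions: the perceived boundary
    between "small" and "large" differences belongs to the
    decision-maker, not to any objective standard.
    """
    large_diff = relative_diff >= diff_threshold
    discernible = discernibility >= discern_threshold
    if large_diff and discernible:
        return "inaction"       # top right: clear but tradeoff-laden
    if large_diff:
        return "misconception"  # bottom right: big bars, weak statistics
    if discernible:
        return "irrelevance"    # top left: reliable but negligible
    return "indecision"         # bottom left: small and indiscernible

print(communication_risk(0.30, 0.60))
```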
We identify the risk of inaction in the face of definitive results as the highest priority for LCA interpretation and communication practices, and we recommend applying decision analysis algorithms that consider mutual differences, limit compensation between criteria, and incorporate uncertainties to rank alternatives and facilitate deliberative decision-making (Cinelli et al. 2014; Pollesch and Dale 2015; Prado et al. 2017). For example, stochastic multi-attribute analysis (SMAA) offers an alternative way to navigate tradeoffs in high-stakes comparative LCA decision-making (Prado et al. 2021a, b; Prado and Heijungs 2018). SMAA can be applied at the midpoint level; it considers uncertainty in the characterized performances and takes a stochastic approach to weights that contrasts multiple perspectives.
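A minimal SMAA-style sketch, with hypothetical performances and a crude normalization (a real application would use proper normalization references and the cited SMAA formulations), samples both uncertain performances and weights, then reports how often each alternative ranks first:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 10_000

# Hypothetical midpoint performances (rows: alternatives A and B;
# columns: three impact categories). Values and 10% spreads are
# illustrative assumptions, not data from any study.
means = np.array([[100.0, 5.0, 2.0],
                  [ 80.0, 9.0, 2.5]])
sds = 0.10 * means

# Sample uncertain performances: shape (trials, alternatives, categories)
perf = rng.normal(means, sds, size=(n_trials, *means.shape))

# Crude normalization so categories are comparable in magnitude
perf = perf / means.mean(axis=0)

# Stochastic weights sampled uniformly from the simplex, contrasting
# many possible stakeholder perspectives rather than fixing one
weights = rng.dirichlet(np.ones(means.shape[1]), size=n_trials)

# Weighted score per trial and alternative (lower impact = better)
scores = np.einsum("tac,tc->ta", perf, weights)

# Rank acceptability: share of trials in which each alternative ranks first
acceptability = np.bincount(scores.argmin(axis=1), minlength=2) / n_trials
print(f"A ranks first in {acceptability[0]:.0%} of trials")
print(f"B ranks first in {acceptability[1]:.0%} of trials")
```

Rather than collapsing the comparison into one deterministic single score, this kind of output communicates how robust a ranking is across both uncertainty and plausible value perspectives.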
Although the whole premise of LCA is to enable holistic systemic thinking, this advantage can be lost when high-stakes decisions fail to explore tradeoffs. Given that environmental context is always complex, policymakers can struggle with making any decision, even when tradeoffs are clear. The temptation is to oversimplify decision problems by redrawing the boundaries of analysis, focusing attention on the few categories they believe should be prioritized, and abstracting away burdens in other categories.

Critique and recommendations
Although the need for LCA communication has never been greater, mainstream LCA practice falls short in its effectiveness to inform decisions. While official ISO guidance has been instrumental in maintaining framework consistency, it is far less effective for confronting tradeoffs. For example, according to ISO, weighting is not permitted for comparative assertions, to avoid superimposing the views of a particular practitioner in ways that would skew interpretation (Cucurachi et al. 2017; Laurin et al. 2016; Prado et al. 2012). Nonetheless, subsequent critique of popular LCA practices and standards has revealed that analysis cannot proceed in the absence of values, and that in the quest for objectivity, certain values are rarely made explicit (Cinelli et al. 2014; Hertwich and Hammitt 2001). What really happens is that organizations, in an effort to make sense of tradeoffs, apply their own interpretation analysis outside of the critical review process, thus risking misinterpretation.
One known obstacle to the broader use of LCA is the difficulty of understanding and communicating results (Laurin et al. 2016). LCA results are often not comprehensible to stakeholders such as policy- and decision-makers, although previous research demonstrates that the integration of life cycle aspects in the design process can improve decision-making involving non-experts (Hollberg et al. 2021).
Certainly, LCA practitioners recognize that the Goal & Scoping phase of any LCA is motivated by more than mere scientific curiosity. The selection of problems and boundaries is, itself, an expression of values about what should be amplified and what should be abstracted away from the scientific model. While the prescriptive elements of standard LCA methods have been obscured by an emphasis placed on reproducibility of results, the opposite problem also exists: oversimplification of scoping can introduce bias in descriptive stages of analysis that results in foregone prescriptive conclusions.
The principal advantage of LCA is that it helps avoid the surprising tradeoffs that are not revealed by less comprehensive approaches. To the extent that the interpretation phase of LCA lacks reliable visualization and communication standards, the risk of misconception can be exploited to play on inherent decision-maker biases.
Decades ago, as the LCA community was working to establish repeatable and reliable methods to quantify environmental impact, interpretation phases were not a high priority. Only later did limitations related to decision-making contexts receive increasing recognition (Cinelli et al. 2014; Cucurachi et al. 2017; Gaudreault et al. 2009; Laurin et al. 2016; Myllyviita et al. 2014; Pollesch and Dale 2015; Sala et al. 2020). Now, the communicative shortcomings of current practices, despite the availability of improved tools for supporting decision-makers (Prado et al. 2019), have become self-evident.
It is time for LCA practitioners to become more conscious of the bridge between science-based information and management so that insights gained from LCA can be applied effectively in the decisions of any organization. The incorporation of decision analysis tools for the interpretation of comparative LCA results can help practitioners provide recommendations for action by allowing consideration of uncertainty, tradeoffs, and values when stakes are high. As LCA practitioners, we must learn to navigate this space because in many instances, more information and more refined data will not improve communication. Without awareness of the disconnect between gathering facts and applying values, and a commitment to improve communicative practices, interpretation of LCA results by managers will remain at risk of irrelevance, inaction, indecision, and misconception.
Funding Open Access funding provided by Colombia Consortium.

Conflict of interest The authors declare no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.