Communicating future climate projections of precipitation change

Understanding how precipitation may change in the future is important for guiding climate change adaptation. Climate models are the primary tools for providing information on future precipitation change, though communicating and interpreting results of different model simulations is challenging. Using an online survey, completed by producers and users of climate model information, we compare and evaluate interpretations of different approaches used to summarise and visualise future climate projections. Results reveal large differences in interpretations of precipitation change arising from choices made in summarising and visualising the data. Respondents interpret significantly smaller ranges of future precipitation change when provided with the multi-model ensemble mean or percentile information, which are commonly used to summarise climate model projections, compared to information about the full ensemble. The ensemble mean is found to be particularly misleading, even when used with information to show model agreement in the sign of change. We conclude that these approaches can lead to distorted interpretations which may impact on adaptation policy and decision-making. To help improve the interpretation and use of climate projections in decision-making, regular testing of visualisations and sustained engagement with target audiences are required to determine the most effective and appropriate visualisation approaches.


Introduction
How should we summarise and communicate the results of multi-model climate projections? Choices made in communicating climate projections to different audiences can impact the accessibility, relevance and scientific integrity of the information. While a range of approaches are used in practice, evidence of how different audiences interpret visualisations of climate projections remains limited and represents a key gap in our knowledge to improve the translation of climate science into services (Spiegelhalter et al. 2011; Christel et al. 2018).
Climate modelling centres across the world are producing a growing volume of climate projection data from global climate models (GCMs) (e.g. the fifth phase of the Coupled Model Intercomparison Project (CMIP5), Taylor et al. (2012)), regional climate models (e.g. Giorgi et al. 2009) and statistical downscaling methods (Hewitson et al. 2014). The Intergovernmental Panel on Climate Change (IPCC) has advanced methods to combine and communicate multi-model climate projections at global and regional scales (Knutti et al. 2010). The Working Group 1 Summary for Policymakers of the IPCC fifth assessment report (AR5) combines results of over 30 GCMs into a single figure summarising future temperature and precipitation changes for the late twenty-first century (figure SPM.7, IPCC 2013a). However, Harold et al. (2016) warn that these approaches may be inaccessible to non-experts. McMahon et al. (2015) gathered interview data from non-technical users showing that such information is frequently misinterpreted, with people falsely attributing too much of the uncertainty to climate models and ignoring scenario uncertainty. Other regional and national initiatives use a range of approaches to communicate climate model projections, such as probability distributions and percentile information (e.g. UKCP18; EUCP; Villafuerte et al. 2019), as well as scenarios and storylines (e.g. Climate Change in Australia; Jack et al. 2020). Understanding the effectiveness of different methods used to communicate multi-model climate projections is critical to ensure their appropriate use in different contexts.
In this paper, we evaluate approaches commonly used to summarise and communicate multi-model projections of future precipitation change. In Subsections 1.1 and 1.2, we discuss previous studies investigating the communication and visualisation of climate information and present insights from risk communication in other disciplines. In Subsection 1.3, we detail the research questions addressed in this study.

Choices in spatial mapping of precipitation projections
Future climate projections of precipitation change can be communicated as absolute (e.g. mm/day) or relative (e.g. percentage) changes. The IPCC AR5 Summary for Policymakers (IPCC 2013a) and Regional Atlas (IPCC 2013b) use relative changes to convey projections across different regions. However, in other risk communication fields, there is often a preference for information to be communicated using absolute changes (e.g. Fortin et al. 2001; Kause et al. 2020). It has been consistently found that information presented in terms of relative risk elicits greater willingness to act when relative changes are high but absolute changes are small (see Visschers et al. 2009).
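To make this distinction concrete, the sketch below uses invented values (not data from any of the studies cited here) to show how the same projection can look minor in absolute terms yet large in relative terms:

```python
import numpy as np

# Invented baseline (1980-2000) and future (2040-2060) mean
# precipitation for a dry and a wet location, in mm/day.
baseline = np.array([0.5, 5.0])
future = np.array([0.3, 4.5])

absolute_change = future - baseline                       # mm/day
relative_change = 100.0 * (future - baseline) / baseline  # per cent

# The dry location changes by only -0.2 mm/day in absolute terms
# but by -40% in relative terms; the wet location changes by a
# larger -0.5 mm/day yet only -10% relative to its baseline.
print(absolute_change, relative_change)
```

Which framing better supports a decision depends on the application: a 40% decrease may matter greatly for a rain-fed farm even where the absolute change is small.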
When presenting spatial data from climate projections, information producers may display data using gridboxes (at the resolution of observation or model datasets) or smoothed data using contours. Stephens et al. (2012) warn that some users may perceive contoured information to have a greater spatial resolution, impacting on understanding and the perceived scientific credibility of the information. Understanding whether the use of contours versus gridboxes affects interpretation is therefore an important topic to address.

Communicating climate model uncertainty
Climate projections are inherently uncertain. Comparing multiple climate models captures some of this uncertainty, often expressed through the distribution of values produced and the level of agreement across model simulations. Guidance is available to support the visualisation and communication of uncertainty in climate projections (e.g. Kaye et al. 2012), though there has been limited empirical testing of interpretations using different approaches.
Recent studies have examined responses to different ways of communicating uncertainty ranges in climate projections. For example, using online surveys, Daron et al. (2015a) show that different ways of visualising future temperature projections affect assessments of confidence in the projections, and Lorenz et al. (2015) reveal differences in comprehension and preferences for visualisations amongst local adaptation practitioners. However, these studies focussed on displays of temporal data rather than maps showing spatial information which are preferred for many applications (Taylor et al. 2015). Using maps of future temperature projections, Retchless and Brewer (2016) assess interpretations of different approaches to visualising uncertainty, exploring the effectiveness of bivariate maps and different colour palettes to communicate the magnitude and statistical significance of changes. However, spatial patterns of precipitation change are typically less smooth than for temperature, and since projected precipitation changes often span increases and decreases (IPCC 2013a), it is difficult to use bivariate maps that apply shading to represent uncertainty on multi-model mean values close to zero, where values are typically coloured white using diverging colour palettes.
Hatching or stippling is often used to illustrate model agreement, to show areas of high agreement in drier or wetter conditions in the future, or to compare changes with past variability (e.g. IPCC 2013a). Yet the efficacy of these approaches in communication has not been empirically tested and thus requires investigation.

Research questions
This study aims to provide empirical evidence on how different ways of summarising and communicating multi-model climate projections impact on their interpretation. In doing so, we aim to help climate service providers make more informed choices about how to represent climate projections and uncertainty information in products and services, and aid users in understanding which approaches are more appropriate for their context.
Specific research questions addressed in this study are:
1. How do absolute versus relative changes in precipitation affect interpretation?
2. How does contoured versus gridded display of precipitation data affect interpretation and the perceived credibility of projections?
3. How do different ways of summarising and presenting multi-model projections of precipitation change affect interpretation?
Questions 1 and 2 target specific choices in the display of model projections of precipitation change. Question 3 is a broader question on the challenge of summarising and displaying multi-model information. We focus on assessing some common approaches, such as the use of three-panel summaries and ensemble percentile information. For all questions, we compare interpretations between producers and non-producers of model information. Section 2 outlines the methods, including information about respondents, survey structure and visualisations tested. Section 3 presents results addressing research questions 1 and 2, and Section 4 presents results addressing research question 3. Section 5 provides conclusions, caveats and recommendations to extend the research. Additional information, including the full set of survey responses (excluding personal information), is available as supplementary material.

Sample
Data was gathered using an online survey targeted at both producers and advanced users of climate model projections, i.e. people directly using climate model outputs (e.g. scientists, consultants, technical officials in government) rather than derived products and messages, as they play a critical role in the translation of climate projections within the climate services value chain. The study uses GCM data from CMIP5 simulations under Representative Concentration Pathway 8.5 (RCP8.5). Visualisations focus on southern Africa as this region was the focus of a previous study where respondents expressed willingness to participate in follow-on research (Daron et al. 2015a), and it aligns with work in the Future Resilience of African CiTies And Lands (FRACTAL) project. Between October 2017 and March 2018, 167 participants completed the survey. A further 213 people partially completed the survey, but only complete responses are included in the analysis presented. Participants were recruited through three channels: (1) emails to previous survey respondents who agreed to participate in follow-on research (Daron et al. 2015a); (2) advertisements in climate science and services mailing lists and newsletters (e.g. the Climate Services Partnership newsletter); and (3) circulation amongst the authors' professional networks.
All respondents are 20 years of age or older, with 60% between the ages of 30 and 49. Thirty-nine per cent are resident in the UK, 14% in the USA, 13% in South Africa and the remaining 34% from another 38 countries. Fifty-nine per cent identify as male, 38% as female and 1% other. Respondents work at a range of institutions, with 37% at universities, 18% at other scientific organisations, 12% working in government and the remaining 33% from other organisations (e.g. NGOs, consultancies and private businesses). Levels of education are high with 85% of respondents holding a Masters or PhD. Ninety-six per cent of respondents are native or fluent in English, and 4% are intermediate level. Most respondents expressed high confidence in dealing with quantitative information; over half reported 17 or above (on a scale from 0, "not confident at all", to 20, "extremely confident") for their confidence in using probabilities.
The sample represents producers (52%) and non-producers (48%) of climate model information. Eighty-four per cent currently use information from weather or climate model simulations in their work, and 71% use information from climate model ensembles ( Figure A1, Annex 1, Supplementary Material).

Survey design and procedure
After requesting some background information, respondents were asked for their perspectives on the effectiveness of different ways to summarise and communicate climate model projections to policymakers (results in Section 4.1). They were then randomly assigned to one of four groups, hereafter groups A, B, C and D, with 39, 40, 38 and 50 respondents, respectively. The groups provide four samples of the wider population, enabling comparisons across different visualisations shown to each group.
Each respondent was shown a sequence of visualisations from four successive sets (see Table 1). Each set contains four slightly different visualisations (see Section 2.3), one per group. Sixteen visualisations were therefore tested in total, though each respondent only viewed four, one from each set according to their assigned group. To limit carry-over effects (discussed in Section 5.2), a layered approach was used, i.e. information from the ensemble was introduced gradually as the survey progressed, and respondents did not see the full ensemble until the final set.
For each visualisation, respondents were first asked questions to assess objective understanding (results in Section 3.1) and then questions to assess subjective interpretation. Respondents were able to provide explanations for specific answers and overall feedback. Specific survey questions are provided in the results sections, where we explain how they address the study research questions (Section 1.3). The full survey, including questions and answer options, is in Annex 2 of the Supplementary Material.

Visualisations
Here, we describe the visualisations tested in the study. All visualisations show projections of precipitation change by the mid-twenty-first century (2040 to 2060) compared to the late-twentieth century (1980 to 2000) under the RCP8.5 scenario. The same spatial domain is used throughout, with a key to help identify countries. The colour palette used for all visualisations (brown indicating drier and blue wetter) is often used for visualising precipitation projections; we note that colour palettes can significantly affect interpretation (Hawkins 2015; Retchless and Brewer 2016). The captions given are the same as in the survey and exclude details of the RCP scenario and GCMs used.

Set 1: Output from a single climate model simulation
As outlined in Table 1

Set 2: Multi-model ensemble mean information
The second set of visualisations (Fig. 2) includes four maps showing percentage precipitation changes. Data is taken from nine GCM simulations, chosen at random from a subset of CMIP5 data available to the authors through research in FRACTAL. At each gridbox, the average precipitation change of the ensemble members is given (i.e. the ensemble mean). Figure 2a displays information using contours while Fig. 2b shows the same information using gridded data. Figure 2c is the same as Fig. 2b but includes stippling over gridboxes where more than 2/3 of the models agree on the sign of change. Figure 2d is the same as Fig. 2a but includes hatching where more than 2/3 of the models agree on the sign of change.
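A minimal sketch of how the fields behind such a map might be computed is given below, using random numbers in place of the CMIP5 data. The rule of marking gridboxes where more than 2/3 of members share the sign of the ensemble-mean change is one common convention and is assumed here; it is not necessarily the exact criterion used for the study figures.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for nine GCM members on a small lat-lon grid:
# percentage precipitation change with shape (member, lat, lon).
changes = rng.normal(loc=-5.0, scale=10.0, size=(9, 4, 5))

# Ensemble mean at each gridbox, as mapped in Fig. 2a-d.
ensemble_mean = changes.mean(axis=0)

# Agreement mask: mark gridboxes where more than 2/3 of members
# agree with the sign of the ensemble-mean change.
agree_fraction = (np.sign(changes) == np.sign(ensemble_mean)).mean(axis=0)
stipple = agree_fraction > 2.0 / 3.0

print(ensemble_mean.shape, int(stipple.sum()))
```

The boolean `stipple` mask would then be overlaid on the mean map as stippling (Fig. 2c) or hatching (Fig. 2d).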

Set 3: Three-panel visualisations showing the range and central tendency
Set 3 visualisations (Fig. 3) use the same data as set 2, with each visualisation containing three panels to represent the range and central tendency of the ensemble. Figure 3a shows the simulation with the largest decrease (driest), least change (middle) and largest increase (wettest) in precipitation averaged over all land points. Summarising the ensemble information in this way preserves the spatial patterns of individual simulations. Figure 3b, c and d show data from gridbox-scale calculations across the ensemble members; adjacent gridbox values may come from different simulations. Figure 3b shows the largest decrease (minimum), mean and largest increase (maximum) in precipitation at each gridbox. Figure 3c shows the 10th, 50th (median) and 90th percentiles of the ensemble for each gridbox (e.g. as used in UKCP18). Figure 3d shows the 25th, 50th (median) and 75th percentiles (e.g. as used in the IPCC AR5 Regional Atlas, IPCC 2013b). The chosen variations in display and information content were selected to address research question 3.
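The per-gridbox statistics behind panels like Fig. 3b-d can be sketched as follows, again with synthetic data; NumPy's default linear interpolation between order statistics is assumed for the percentiles:

```python
import numpy as np

rng = np.random.default_rng(1)
changes = rng.normal(loc=0.0, scale=15.0, size=(9, 4, 5))  # (member, lat, lon)

# Fig. 3b-style panels: per-gridbox minimum, mean and maximum.
# Each gridbox's extreme may come from a different simulation,
# which is why spatial coherence is lost relative to Fig. 3a.
panel_min = changes.min(axis=0)
panel_mean = changes.mean(axis=0)
panel_max = changes.max(axis=0)

# Fig. 3c/3d-style panels: per-gridbox ensemble percentiles.
p10, p50, p90 = np.percentile(changes, [10, 50, 90], axis=0)
p25, p75 = np.percentile(changes, [25, 75], axis=0)

print(panel_min.shape, p50.shape)
```

With only nine members, the 10th and 90th percentiles are interpolated between the two outermost values, so the percentile panels necessarily display a narrower range than the min/mean/max panels; this is one mechanism behind the narrower interpreted ranges reported in Section 4.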

Set 4: Full nine-member model ensemble
Set 4 visualisations (Fig. 4) display all nine simulations used to construct the summary visualisations in sets 2 and 3. Panels in Fig. 4a and c are in a randomised order, while the panels in Fig. 4b and d are ordered from the largest decrease (driest) in the top-left to the largest increase (wettest) in the bottom-right, averaged over land.

Significance testing
The 2-sample Student's t-test is used to test the significance of differences between group responses where results are continuous quantitative data. Differences between group responses for categorical data were tested for significance using the non-parametric Chi-squared test of independence. Chi-squared tests are only conducted for answer categories selected by respondents, since the test cannot be calculated where fields contain zero responses; this caveat is highlighted on results where applicable. Significance testing is also conducted to investigate differences in responses between producers (n = 87) and non-producers (n = 80) of weather or climate model simulations. We adopt an alpha level of 0.05 throughout, reporting findings that are statistically significant at the 95% confidence level. To control the familywise error rate where multiple pairwise comparisons are made for a specific set of answers, a Bonferroni correction is applied to the alpha value. Cohen's d is used to report effect sizes where appropriate.
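This testing workflow can be illustrated with SciPy on synthetic responses; the group sizes and contingency counts below are invented, and Cohen's d is computed from the pooled standard deviation, which is the usual convention:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Continuous responses, e.g. communicated ranges of % precipitation
# change, for two synthetic respondent groups.
group_x = rng.normal(5.0, 2.0, size=80)
group_y = rng.normal(6.0, 2.0, size=87)

t_stat, p_val = stats.ttest_ind(group_x, group_y)  # 2-sample Student's t-test

# Cohen's d from the pooled standard deviation.
nx, ny = len(group_x), len(group_y)
pooled_sd = np.sqrt(((nx - 1) * group_x.var(ddof=1) +
                     (ny - 1) * group_y.var(ddof=1)) / (nx + ny - 2))
cohens_d = (group_y.mean() - group_x.mean()) / pooled_sd

# Categorical responses, e.g. counts per answer category for two
# groups; zero-count categories are dropped before testing.
counts = np.array([[35, 30, 15],
                   [20, 40, 18]])
chi2, p_chi, dof, expected = stats.chi2_contingency(counts)

# Bonferroni correction for k pairwise comparisons on one answer set.
k = 3
alpha_corrected = 0.05 / k

print(round(float(p_val), 4), round(float(cohens_d), 2), dof, alpha_corrected)
```

A result is then reported as significant only if its p-value falls below `alpha_corrected` when it belongs to a family of k comparisons.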

Qualitative data
Respondents were asked to provide comments on all visualisations. Additionally, for Figs. 2, 3 and 4, respondents had the option to provide further explanations of their answers and a written summary (aimed at non-experts) of the information contained in the visualisations. This data is analysed using an inductive content analysis approach to provide further insight into the quantitative analysis.
Results, part 1: interpreting absolute versus relative and gridded versus contoured precipitation data

Section 3.1 provides responses to questions testing objective understanding of the visualisations in sets 1 and 2. Results in Section 3.2 address research question 1, on the impact of using absolute versus relative change in precipitation, and results in Section 3.3 address research question 2, on contoured versus gridded display of data. Full results are provided in Annex 3, Supplementary Material.

Objective understanding
For each visualisation in set 1 (single model output), the question "Which of these statements best describes the information?" was asked, with three possible answers given. Only 3 of the 167 respondents provided an incorrect answer. Ninety-two per cent of respondents also correctly answered a further question, "What is the value of rainfall change projected at the location marked X?", within ± 1 colour interval. Similarly, for the set 2 visualisations (ensemble mean information), respondents were asked to report the value of rainfall change at a marked location, with 97% of respondents answering correctly within ± 1 colour interval.
High levels of accuracy in these responses provide confidence that subsequent responses to questions targeting subjective interpretation are unlikely to be significantly influenced by misunderstanding the content of the visualisations.

Absolute versus relative changes in precipitation
Using the set 1 visualisations (single model output), respondents were asked, "What is the general direction of rainfall change projected for [a] Botswana, [b] Mozambique, and [c] Angola?" with four possible answers: drier, wetter, no change or mixed changes (see Figure A2 in Annex 1, Supplementary Material). For Botswana and Mozambique, interpretations are very similar across the four visualisations, with between 96 and 97.5% stating that Botswana is expected to get drier, and between 92 and 95% stating that Mozambique is expected to have mixed changes. For Angola, we find significant differences in responses depending on the visualisation viewed. For visualisations showing absolute data, the most selected response is "mixed changes", group A (45%) and B (56%). However, there is a significant shift (χ2(2) = 25.99, p < 0.001; the Chi-squared test does not include the "wetter" category, which received zero responses) towards "no change" as the most selected response when data is presented as percentage changes, group C (71%) and D (72%). Displaying precipitation change as percentages versus absolute values therefore impacts interpretations of how climate change is projected to affect a specific country.
Respondents were also asked, "Would you use this type of visualisation in your work?". On average, 73% of those viewing absolute change information stated they would use the visualisation compared to 86% of those viewing percentage changes, showing a significantly higher preference for percentage change information (χ2(1) = 4.40, p = 0.036). Comments from some respondents also suggested difficulty in understanding and communicating absolute information expressed in mm/day. For example, one respondent commented "why rainfall per day? It doesn't rain every day".

Gridded versus contoured precipitation data
Two of the set 1 visualisations (single model output) used gridded data (groups A and C) and two used contoured data (groups B and D). We do not find any significant differences in objective understanding between the two methods (Section 3.1), nor in responses to the questions on the general direction of precipitation for specific countries (see Section 3.2). Finally, there is no significant difference in the percentage of people who would use gridded (81%) versus contoured (79%) visualisations in their work.
For set 2 (ensemble means), we find some differences in interpretation between visualisations using contours (groups A and D, with hatching showing model agreement for group D) versus gridboxes (groups B and C, with stippling showing model agreement for group C). Interpretation was first assessed by asking, "Which country shows the least change in rainfall?". Mozambique is the most selected country for all groups (from 52% for group B to 58% for group D). However, the next most selected country differs depending on whether the visualisation uses contours or gridboxes: Angola (21% groups A and D, 33% groups B and C) and Zimbabwe (23% groups A and D, 9% groups B and C). The distribution of responses across the three selected countries differs significantly between visualisations using contours versus gridboxes (χ2(2) = 6.90, p = 0.03). While subtle, the visualisations using gridded data show a slightly larger region of higher magnitude drying in Zimbabwe than when using contours, which may explain why fewer respondents in groups B and C selected Zimbabwe as the country showing the least change (see Figure A3, Annex 1, Supplementary Material).
Two further questions on set 2 asked which countries contained the greatest magnitude of drying and wetting. For drying, Namibia is the most selected country (82% group A, 58% group B, 66% group C, 54% group D), and South Africa is the next most selected country (10% group A, 43% group B, 26% group C, 46% group D). The visualisations for groups A and D use contoured data, but hatching is also included for group D (Fig. 2d) where more than 2/3 of models agree on the sign of change. This difference results in a significant shift in interpretations of the greatest magnitude of drying from Namibia to South Africa (χ2(1) = 11.83, p < 0.001). For wetting, South Africa is the most selected country (49% group A, 55% group B, 55% group C, 72% group D), and the second most selected countries are Angola (35% group B, 36% group C, 20% group D) and Mozambique (28% group A). While the second most selected country differed between groups A (contoured) and B (gridded), the effect is not found to be significant.
Overall, results suggest that showing data using contoured rather than gridded values can impact on interpretation, though the effects found are relatively small and often due to subtle changes in the display of information.
The survey also assessed respondents' perceptions of the scientific credibility and usefulness of the different visualisations. Table 2 summarises responses to the following three questions for the set 2 visualisations:
1. "How scientifically credible do you think the information being displayed is?"
2. "How useful do you think this type of information is for a national government climate change policymaker?"
3. "Would you use this type of visualisation in your work?"
While perceived scientific credibility is highest when data is contoured and includes hatching to show model agreement (group D), we do not find any statistically significant differences compared to other responses. There are also no significant differences in the perceived usefulness of the different visualisations or in how many respondents would use them in their work. Despite the lack of significant differences, the qualitative data provide some explanation of why some individuals believe contoured data is more appropriate than gridded data for communication to policymakers. For example, one respondent commented "the pixelated map would be off-putting or distracting for decision-makers. Smoother contours would eliminate that problem even though it would come at the cost of some scientific integrity".

Perspectives on the effectiveness of different approaches

Table 3 shows the extent to which respondents agree with statements on the effectiveness of different ways to summarise climate model ensemble projections (statements 1 to 3), and the usefulness to policymakers of different statistical techniques commonly used for summarising climate model uncertainties (statements 4 to 6). Answers were provided using a sliding bar on a scale from 0 (strongly disagree) to 10 (strongly agree); see Annex 2, Supplementary Material.

High standard deviations show large variations in answers, with the highest disagreement on the effectiveness of the multi-model mean as an approach to summarise projections. Overall, respondents express that providing the range (minimum to maximum) is a significantly more effective way to summarise information for policymakers than using the multi-model mean (t(300) = 5.16, p < 0.0001, d = 0.59).

There are also large variations in responses regarding the usefulness of different techniques to communicate uncertainties. Very few people disagree that information about the significance of changes compared to past variability, or the proportion of models agreeing on the sign of change, is useful to policymakers. Indeed, these techniques (statements 5 and 6) are considered significantly more effective for communicating uncertainty than providing statistics on the spread of projections (statement 4) (t(310) = 5.46, p < 0.0001, d = 0.62 for statement 5; t(303) = 5.52, p < 0.0001, d = 0.63 for statement 6). Note that producers of climate model information consider the provision of statistics significantly more useful than non-producers (mean response of 6.17 compared to 5.27; t(149) = 2.28, p = 0.023, d = 0.38).

Perceived scientific credibility, usefulness and use
As presented in Section 3.3 for the set 2 visualisations (ensemble means), the survey also assessed perceptions of scientific credibility and usefulness, as well as how many respondents would use the visualisations in their work, for the set 3 (three panel summaries) and set 4 (full ensemble) visualisations (see Table 4).
For set 3 (three panel summaries), the group D visualisation showing 25th, 50th and 75th percentile information is considered significantly more credible than the group A visualisation showing the driest, middle and wettest projection (t(71) = 2.69, p = 0.009, d = 0.62). The group D visualisation (as used in the IPCC AR5 Atlas, IPCC 2013b) also scores highest for usefulness and the percentage of people who would use it in their work, while the group A visualisation is considered the least useful and least likely to be used. The latter is the only approach that does not use statistically processed data but rather presents three simulations spanning the uncertainty range, retaining plausible spatial patterns of change. Notably, producers rate the usefulness of this approach significantly lower than non-producers (mean response of 3.15 compared to 4.77; t(31) = 2.43, p = 0.021, d = 0.85). For set 4 (full ensemble), usefulness is found to be higher for the group B visualisation (ordered, no model names) compared to the group D visualisation (ordered, includes model names) (t(74) = 2.94, p = 0.004, d = 0.66). This suggests that including model names decreases usefulness to policymakers; however, the comparison between the group A visualisation (randomised, no model names) and the group C visualisation (randomised, includes model names) is not found to be significant. There is no evidence of increased perceived usefulness when sorting the projections from driest to wettest.

Table 3 Responses to "Indicate how much you agree with the following statements" [statements 1 to 3], and "Indicate how useful you think the following techniques are for communicating future climate projection uncertainties to policymakers" [statements 4 to 6]

Statement | Mean | Standard deviation
1) Providing the multi-model mean (i.e. average) from multiple climate model projections is an effective way to summarise the information for policymakers. | – | 2.79
2) Providing the range (i.e. minimum to maximum) from multiple climate model projections is an effective way to summarise the information for policymakers. | 6.83 | 1.96
3) Providing a percentile range (e.g. 10th to 90th percentiles) from multiple climate model projections is an effective way to summarise the information for policymakers. | – | –
Many comments focused on the value of the information for use in policy and decision-making, with many respondents expressing a need for further information on the reliability of the models. For set 4 (full ensemble), many respondents expressed concern that the visualisations would confuse policymakers and "serve to undermine scientific credibility", while others felt it was important to present and communicate the uncertainties to inform robust decisions, e.g. "this is the most honest way of displaying this type of model output".

Interpreting climate change projections across countries
Questions used for set 2 were altered slightly to account for the different information content in the sets 3 and 4 visualisations. Respondents were asked, "which country shows the greatest potential for much [wetter / drier] conditions in the future?" The results for set 3 (three panel summaries) are shown in Figure A4, Annex 1, Supplementary Material.
For set 3, the majority of respondents in group A (driest, middle, wettest) and group B (min, mean, max) stated that Botswana had the greatest potential for wetter conditions. Conversely, only one respondent in group C (10th, 50th, 90th), and no one in group D (25th, 50th, 75th), selected Botswana, with Mozambique the most selected answer; recall that Mozambique was the most selected country for the "least change" when using the set 2 visualisations (ensemble means) (see Section 3.3). On the greatest potential for drier conditions, the majority of groups A and B selected Zambia, group C was evenly split between Botswana and Namibia, and the majority of group D selected Botswana. Between-group differences arise because groups A and B were provided with information on the full range of available projections (expressed in different ways), while groups C and D received percentile information, where the driest and wettest projections are excluded from the information presented. For set 4 (full ensemble), Botswana is the most selected country with the greatest potential for wetter conditions. For drier conditions, Zambia is the most selected country except for group C, where there is an even split between Zambia and Botswana. The results for set 4 appear influenced by earlier exposure to the set 3 visualisations, highlighting a carry-over effect from the survey design, discussed further in Section 5.2. For example, in set 3, the majority of respondents in groups A and B selected Botswana as having the greatest potential for wetter conditions. While some respondents in these groups selected Zambia when viewing set 4, more than half continued to select Botswana, contrasting with groups C and D, where almost no one selected Botswana for set 3 and fewer than half selected Botswana when viewing set 4.
Overall, these results show that the countries determined to be most affected by climate change can depend on the information content of the visualisation. This could affect what information is included in summary communications to policymakers.

Interpreting the range of projections for a single country
Understanding the full range of projections can be important for many climate change adaptation applications (Daron et al. 2015b; Kundzewicz et al. 2018). To understand how interpretation of a multi-model range differs when the data are summarised and displayed in different ways, the study asked a common question across all visualisations used in sets 2, 3 and 4: "What range of precipitation change values would you communicate to adaptation policymakers in Botswana?" Botswana was chosen as an illustrative example because the projected changes across the ensemble show large model variations, with fewer than two-thirds of simulations agreeing on the sign of the change. Figure 5 shows the answers of each respondent across all 12 visualisations. Individual bars show the ranges provided, from the driest to the wettest expected precipitation change. Background colours distinguish values for wetting and drying, i.e. above and below zero change. The mean of the ranges provided for each visualisation is shown in bold text.
Results show very large differences in responses both for the same visualisation and across visualisations. Ranges given for set 2 (ensemble means) are very small in comparison to those given for set 3 (three-panel summaries) and set 4 (full ensemble): only 2 of 167 respondents provide ranges exceeding 20% when viewing set 2. Moreover, although the set 2 visualisations shown to groups C and D included information on model agreement in the direction of change, very few respondents appear to have taken this information into account. Botswana is not covered by stippling (Fig. 2c) or hatching (Fig. 2d), showing that both increases and decreases in precipitation are projected by ensemble members. Yet only 8% of group C and 12% of group D provide ranges that include an increase in precipitation.
For all four groups, we find significant differences (p < 0.005) in the precipitation ranges respondents would communicate across the three sets of visualisations, with the exception of group A (driest, middle, wettest in set 3) and group B (min, mean, max in set 3), whose responses did not change significantly between set 3 and set 4 (full ensemble). By contrast, group C (10th, 50th, 90th in set 3) and group D (25th, 50th, 75th in set 3) gave significantly smaller ranges for set 3 than for set 4 (t(71) = 2.92, p = 0.005, d = 0.70 for group C; t(92) = 9.60, p < 0.0001, d = 1.97 for group D).
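For readers wishing to reproduce this style of comparison, the paired-sample t statistic and Cohen's d for paired differences can be computed directly. The sketch below uses synthetic illustrative values, not the study's actual survey responses:

```python
import math
import statistics

# Paired-sample t-test and Cohen's d, as used for the set 3 vs set 4
# comparisons. The values below are synthetic illustrative ranges (in
# percentage points), NOT the study's actual survey data.
set3 = [10, 25, 15, 40, 20, 12, 30, 18, 22, 35]   # ranges given for set 3
set4 = [30, 45, 20, 70, 55, 15, 50, 60, 25, 65]   # ranges given for set 4

diffs = [b - a for a, b in zip(set3, set4)]       # within-respondent change
n = len(diffs)
mean_diff = statistics.mean(diffs)
sd_diff = statistics.stdev(diffs)                 # sample SD (ddof = 1)

t = mean_diff / (sd_diff / math.sqrt(n))          # t statistic, df = n - 1
cohens_d = mean_diff / sd_diff                    # effect size for paired data

print(f"t({n - 1}) = {t:.2f}, d = {cohens_d:.2f}")
```

With real data, a library routine such as `scipy.stats.ttest_rel` would typically be used; the manual calculation above makes the underlying arithmetic explicit.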
When respondents were shown the set 4 visualisations (full ensemble), responses converge across the groups, with average ranges of between 54.5% (group A) and 59.5% (group D) and a mean range across all respondents of 56% (from 25% drier to 31% wetter). However, some respondents continued to communicate very small ranges of only a few percentage points. Crucially, we do not find any significant differences in responses when subsampling by producers and non-producers.
Respondents were asked to explain their answers. Some explained that they selected a range based only on the data presented in the visualisations, while others also used expert judgement, for example by inflating the range beyond the information provided to account for model uncertainty. One respondent stated, "the exact number is -6 to -12%, but that answer gives a false accuracy of the change. I would expand the range". Another respondent expanded the range to ± 100%, explaining that "without further information about the ensemble spread or the degree of confidence attributed to the models by the scientists, it is impossible to provide a scientifically justifiable figure for the uncertainty". Some respondents also commented on the reaction their answers might elicit from policymakers, for example: "(I) did not use 10th to 90th as these are very wide and likely to lead to panic rather than action [so I] guesstimated an interquartile range". Conversely, one respondent said that the full range should be communicated: "it is the extremes that matter to adaptation policymakers".
In assessing the qualitative data gathered, we find that the visualisation approach influences the ability or willingness of respondents to provide summary messages for policymakers. Thirty per cent (32 out of 105) of comments provided for set 4 (full ensemble) demonstrate an inability or reluctance to provide a summary based on the visualisation, compared to 14% (19 out of 137) of comments for set 2 (ensemble means); see shaded cells in Tables 2 and 3, Annex 3, Supplementary Material. Nine respondents refused to provide any information to policymakers using the set 4 visualisations, compared to five and two respondents for sets 2 and 3, respectively. Comments on set 4 include "Look at another plot, this one is not useful" and "I actually have no idea. It is difficult to weight or average the figures from a visual inspection". This implies that the choice of visualisation approach used to communicate climate projections may impact not only understanding and interpretation of the data, but also whether and how such information can be translated for policymakers.

Main findings
Prior research shows that different approaches to summarising and visualising climate projections can affect interpretation, but findings are often general and difficult to apply in practice. Here, we aim to inform practical visualisation choices, focusing on communicating multi-model climate projections of future precipitation change and providing empirical evidence of the strengths and limitations of different commonly used approaches. Overall, results show large variations in interpretation between individuals viewing the same information, both within the climate information producer and non-producer groups.
Addressing the first two research questions (Section 1.3), results show that basic visualisation choices can influence interpretations of climate projections. We find that using absolute versus percentage change information can impact on interpretation of climate projections, with potential to alter summary messages used in communicating such visualisations. We also find differences in the uptake of information, with more respondents stating they would use percentage change information in their work. Finally, we show that gridded versus contoured data can affect interpretation, though the effects are modest and impacts on perceived scientific credibility and use are not found to be robust.
In addressing the third research question, the study examines interpretations of the climate projections and preferences for different approaches. We find that respondents are more likely to use three-panel summary visualisations where data has been statistically processed (e.g. using percentile information) than visualisations showing individual projections spanning the uncertainty range, suggesting that preserving plausible spatial patterns of climate projections is not considered a priority.
When assessing the range of precipitation change values that respondents would communicate to policymakers in Botswana, we find that choices in how to summarise and display ensemble data lead to very different, and arguably distorted, interpretations of the underlying data. In particular, we find that very few respondents appear to use information on model agreement in the direction of precipitation change on visualisations showing ensemble means (set 2, Fig. 2c and d). When provided with model agreement information, indicating that both increases and decreases in precipitation change are possible in Botswana, fewer than 12% of respondents communicated the possibility of wetter conditions in the future to adaptation policymakers. This is particularly notable given this visualisation approach is used in the IPCC AR5 summary for policymakers (IPCC 2013a). Moreover, for all visualisations tested, very few people communicate precipitation change values outside of the range of values provided in the visualisations. There were some exceptions where respondents inflated the range based on their understanding of model uncertainties, thereby employing expert judgement.
Overall, results demonstrate a trade-off in achieving richness of information and salience in the communication of climate projections (consistent with findings of Stephens et al. 2012), which was recognised by some respondents in the comments provided. Through testing a variety of approaches to summarising and displaying multi-model projections, we find that approaches containing more information are considered to be less useful than those which show limited summary information. After viewing the set 4 visualisations (full ensemble), one respondent stated: "I would never show a policymaker or decision maker individual model results, it would be too confusing for them to interpret. Always use an ensemble mean". Consistent with comments from other respondents, this statement demonstrates that in providing climate projections to inform societal decision-making, there is often a preference to show summary information to improve accessibility and policy relevance. However, our results show distorted interpretations of summary information, in particular for visualisations of the multi-model mean (set 2), highlighting the risks of simplifying ensemble projections and using approaches that do not clearly articulate uncertainties.

Limitations and further research
Only a limited number of visualisations could be assessed in our study. Future studies should include variations on the visualisations tested, including different underlying datasets (e.g. using different climate models and RCPs), different aesthetic attributes (e.g. colour) and alternative ways of summarising ensemble data.
High levels of comprehension (see Section 3.1) mean we have confidence that differences in interpretation across the groups arise from differences in the visualisations tested. However, our sample is highly educated, with respondents largely working in academia, science-based organisations and technical roles. Therefore, findings may not apply to broader audiences.
The online survey approach limits how much we can understand about individual thought processes when answering questions. In particular, testing successive visualisations based on the same underlying dataset introduces carry-over effects, meaning responses may be influenced by exposure to previous visualisations. We attempted to reduce this effect through careful design: subsampling, sequencing the visualisations, and layering the data shown throughout the survey so that the full ensemble was only made visible in the final set of visualisations. However, the effect cannot be removed entirely. Further work should include methods that enable more in-depth analyses of individual interpretations.
This study targets a subset of the approaches used to communicate multi-model climate projections. We recommend further research on how other approaches, such as interactive visualisations available in web-based portals, influence understanding and interpretation. Finally, further research is needed to examine how such visualisations are used in real-world decisions and policy contexts.

Concluding remarks
Interpretations of climate data visualisations vary substantially amongst individuals, due to a combination of factors including different professional experience, scientific literacy, backgrounds, perspectives, education and values. In applying climate projection information to real-world decision-making contexts, there is a need to account for interpretation uncertainty, which may be comparable in its impact to other sources of climate uncertainty (i.e. model, scenario and initial condition uncertainty; see Hawkins and Sutton 2009). Furthermore, different views and assumptions about what constitutes policy-relevant information will affect the messaging and quantification of future climate risk communicated to policymakers.
Based on the findings of this study, we cannot recommend a single "best" approach for summarising and communicating multi-model climate projections of future precipitation change that will apply universally. However, in the context of climate change adaptation, where information about the full range of projections is often needed, results show that commonly used summary approaches, such as the use of percentile information and in particular the multi-model mean, can lead to distorted interpretations that underrepresent the range of uncertainty in climate projections. We conclude that regular testing of visualisations with target audiences, using appropriate user engagement and climate service co-production approaches (Hewitt et al. 2017;Carter et al. 2019), will help reduce the potential for misinterpretation and lead to the development of more robust and useful visualisations of climate projections.