This section evaluates the final texts of the FAQs (Section 4.1) as well as their reception on social media (Section 4.2).
Text analysis
For this essay, we use two statistical methods to assess the sentence structure of the FAQs and the number of potentially technical terms included in the texts. Such methods can be useful for analysing large amounts of text and provide initial, but limited, insights into readability.
The Flesch Reading Ease index (FRE) assumes that readability is determined by the number of syllables per word and the number of words per sentence (Flesch 1948; Schriver 1990; DuBay 2004; Marnell 2008).Footnote 4 In use for more than 70 years, the FRE has previously been applied in the IPCC context (e.g. Barkemeyer et al. 2016; Stocker and Plattner 2016). However, the FRE does not assess the actual words used or their technicality, the grammar or style of a given text, or the context and general reading ability of its readers.
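As an illustration, the FRE combines average sentence length and average syllables per word using the constants from Flesch (1948). The short Python sketch below applies this formula with a rough vowel-group syllable counter; it is a simplified stand-in for the readability tools used in our analysis, and the example sentences are invented.

```python
import re

def flesch_reading_ease(text: str) -> float:
    """Approximate Flesch Reading Ease score (Flesch 1948).

    FRE = 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)
    """
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))

    def count_syllables(word: str) -> int:
        # Rough heuristic: count groups of consecutive vowels.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)

# Invented examples: a short, plain-English question scores higher (easier to read)
# than a dense, technical statement.
print(round(flesch_reading_ease("What is global warming?"), 1))
print(round(flesch_reading_ease(
    "Anthropogenic radiative forcing exacerbates cryospheric destabilisation."), 1))
```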
All FAQs of the three special reports score between fairly difficult and very confusing, which is below a ‘standard’ or plain English score.Footnote 5 Such scores, however, have even been observed for some Wikipedia entries and New York Times articles (Stewart 2017). Co-drafting between the authors and communication experts—as was the case for SR1.5 and SROCC (Section 2)—led to smaller ranges and higher medians of the FRE scores. The FRE scores of the Special Reports of the Sixth Assessment Cycle are in a similar range to the FAQs of previous cycles, with the median scores for SR1.5 and SROCC slightly higher, and for SRCCL slightly lower, than those of previous reportsFootnote 6 (Fig. 2a, Online resource 2).
De-Jargonizer assesses the technicality of the language used in texts. This tool automatically determines how common the words in a given text are by comparing them to a database of words used on the BBC website (Rakedzon et al. 2017).Footnote 7 Words are assigned frequency levels—common, mid and rare—resulting in a ‘suitability score’, with higher scores indicating less technical texts. Similar to the FRE results, the De-Jargonizer scores show that the FAQs from SR1.5 and SROCC perform better than those from SRCCL, with the latter special report relying more heavily on mid-frequency and rare words (Fig. 2b, Online resource 2).
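The sketch below illustrates the general idea of frequency-based jargon scoring. The word-frequency dictionary, thresholds and scoring rule are hypothetical placeholders; the actual De-Jargonizer relies on a corpus derived from the BBC website and uses its own categories and weighting (Rakedzon et al. 2017).

```python
# Hypothetical corpus counts for a handful of words, for illustration only.
WORD_FREQUENCY = {"climate": 5200, "greenhouse": 900, "adaptation": 850, "overshoot": 40}

def classify(word: str, high: int = 1000, low: int = 50) -> str:
    """Label a word as common, mid-frequency or rare based on corpus counts."""
    freq = WORD_FREQUENCY.get(word.lower(), 0)
    if freq >= high:
        return "common"
    if freq >= low:
        return "mid"
    return "rare"

def suitability_score(words: list[str]) -> float:
    """Share of common and mid-frequency words; higher means a less technical text.
    (A simplified stand-in for the De-Jargonizer's own score.)"""
    labels = [classify(w) for w in words]
    return 100 * sum(label != "rare" for label in labels) / len(labels)

print(suitability_score(["climate", "adaptation", "overshoot", "greenhouse"]))  # 75.0
```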
Our test reveals that words such as ‘livelihood’, ‘greenhouse’ or ‘adaptation’ fall into the mid-frequency category. Examples of rare words include ‘overfishing’ and ‘overshoot’. Being an element of IPCC reports, FAQs will always include terms such as these. Explanations using common words will aid the readability of the text for broader audiences, including non-native English speakers. This can nevertheless be challenging in shorter FAQs.
Automatic text analyses offer simple and fast indications of sentence complexity and language technicality for large amounts of text. Efforts that build on these initial insights, drawing further on the expertise of communication specialists, will likely make the FAQs more comprehensible. It is also worth considering that the increasing complexity of science-related texts might reflect progress in the underlying science and a potentially growing capability of interested readers (Barkemeyer et al. 2016). A more detailed evaluation of the IPCC FAQs against the background of users’ increasing contextual knowledge and the growing popularity of climate-related topics would help to better tailor these texts to their audiences.
Social media statistics of FAQ-related posts
To assess the impact of the FAQs on social media, we looked at their performance after being shared on the official IPCC Twitter and Facebook accounts. The main metric considered is the engagement rate, i.e. the number of interactions with a post (e.g. clicks, likes, shares/retweets) divided by the total number of impressions (the number of times a post appears to users in their timeline or search results). We examine performance depending on the type of content shared (Section 4.2.1) and on the Special Report the content comes from (Section 4.2.2).
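As a minimal illustration of this metric, the snippet below computes an engagement rate from invented numbers; the actual interaction and impression counts come from the platforms’ analytics.

```python
def engagement_rate(interactions: int, impressions: int) -> float:
    """Engagement rate = interactions (clicks, likes, shares/retweets) / impressions."""
    return interactions / impressions if impressions else 0.0

# Invented example: 180 interactions on 4,500 impressions gives a 4.0% engagement rate.
print(f"{engagement_rate(180, 4500):.1%}")
```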
Statistics on the ‘type’ of content
Social media posts about FAQs typically include the title of the question, a link to the answer on the report website and a visual that can take three formsFootnote 8: (i) a figure, which comes from the FAQ itself; (ii) a card, which is a background photo with the name of the FAQ and the special report; or (iii) a gif, which is the dynamic version of a card, i.e. with a video in the background instead of a photo. Posts with figures from the FAQ are the most popular type of content: they generate higher engagement rates on both Twitter and Facebook (Fig. 3a, c). On Twitter, posts with figures attract 1.5 and 2.1 times more clicks through to the actual FAQ on the IPCC website than cards and gifs, respectively. Figures, especially those including a succinct answer as in SR1.5, are the only type of post that presents actual content of the FAQ, which could be another reason for their relative success on social media. Differences between cards and gifs are more nuanced: while gifs seem to trigger higher engagement rates, they also tend to be viewed slightly less often than cards and generate fewer clicks on their links to the actual FAQ.
Statistical comparison of the different Special Reports
SR1.5 shows higher engagement rates of posts compared to the SROCC and the SRCCL (Fig. 3b, d). Figure posts of the SR1.5 led to engagement rates that are 25 and 17% higher on Twitter and Facebook, respectively, compared to card and gif posts of the same report (not shown in Fig. 3). However, even non-figure SR1.5 posts show higher engagement rates than those of the two other reports on Twitter (though not on Facebook). This implies that while the ‘figure effect’ may be part of the reason for the higher performance, it cannot explain all of the differences between the reports. These could be explained by a broader public interest in the topics covered in the SR1.5 compared to the SRCCL and SROCC and/or by the fact that the user survey actively influenced the choice of FAQs answered by the report (Section 3.1).
Therefore, social media statistics strengthen the case for the inclusion of figures in the FAQs, which was already highlighted by the user survey.