1 Introduction

Science communication has been defined as encompassing “all forms of communication by and about the sciences, within science (professional audience) as well as in the [broader] public sphere (general audience)” (Acatech 2017, p. 20; cf. Bubela et al. 2009; Bucchi and Trench 2014; Schäfer et al. 2015). This broad understanding of science communication includes all kinds of communication focusing on scientific work or scientific results, be it within science or to non-scientists, in one-directional or dialogical form (Kahan et al. 2017; Schäfer et al. 2019; Trench and Bucchi 2010). It also includes communication about the natural sciences, the arts or the humanities, and it has considerable overlaps with research fields such as health communication and risk communication.

Analyses of science communication emerged in the late 1960s, at the intersection of science education, social studies of science, mass communication, and museology (Trench and Bucchi 2010; Schäfer et al. 2019). The aim of this research was, and still is, “to understand the underlying mechanisms, structures, and effects of science communication [, and in doing so, science communication research] produced a large number of empirical studies on the content of science communication” (Schäfer et al. 2019, p. 77).

Meta-analyses and literature reviews have reconstructed the development of the research field and identified several trends. They have demonstrated that the field has grown overall, with more annual publications and an increasing number of research journals (Guenther and Joubert 2017; Rauchfleisch and Schäfer 2018; Schäfer 2012b). They have also shown that content analyses are prevalent in research on science communication (e.g., Bucchi and Trench 2014; Guenther and Joubert 2017; Kahan et al. 2017; Schäfer 2012b; Trench et al. 2014), that their number has grown significantly in recent decades (Guenther and Joubert 2017; Schäfer 2012b), and that they have diversified in their research objects (Schäfer et al. 2019): they have internationalized, analyzing content from a more diverse set of countries (Guenther and Joubert 2017; Schäfer 2012b), and they have covered a growing range of media (Metag 2017; Schäfer 2017) and scientific disciplines (Cassidy 2005; Schäfer 2012b; Summ and Volpers 2016).

2 Common Research Designs and Combinations of Methods

Meta-analyses have revealed a variety of research strategies and designs used in content analyses of science communication: quantitative and qualitative content analyses, including variants of linguistic analysis (Fløttum 2016) and discourse analysis (e.g., Barr 2011; Koteyko and Atanasova 2016), as well as comparative and cross-sectional studies comparing media outlets (e.g., Cacciatore et al. 2012; O’Neill 2013; Pellechia 1997), countries (e.g., Metag and Marcinkowski 2014), and temporal developments (e.g., Clark and Illman 2006; Dudo et al. 2011).

Qualitative and quantitative content analyses are used about equally often in the field, and this ratio has remained roughly constant over time, although only a few publications combine both research strategies in the same study (Schäfer 2012b). Comparative (i.e., cross-sectional and/or longitudinal) studies account for more than half of all analyses, but many researchers also employ case-study designs, analyzing a single case in detail (Schäfer 2012b). Recently, analyses of science-related content in traditional media have increasingly been supplemented by studies of online and social media communication (cf. Brossard and Scheufele 2013; Schäfer 2012a; e.g., Veltri 2013; Veltri and Atanasova 2017; Wang and Guo 2018).

Researchers in the field often use content analyses in combination with other methods. Studies have combined manual and computational content analysis, for example to detect intermedia agenda setting (Wang and Guo 2018). Content analysis is also frequently combined with population surveys to analyze communication effects such as framing effects or cultivation. These studies often use content analyses to identify relevant depiction styles or media frames and then analyze the impact of these representations via surveys (Arlt and Wolling 2016; Dudo et al. 2010; Guenther et al. 2015; Guenther and Kessler 2017; Kessler 2016; Zimmermann et al. 2019). Recent studies also rely on content analysis as a method of evaluation; for example, in Kessler and Guenther (2017) and Kessler and Zillich (2018), eye-tracking recordings were coded by means of content analysis to investigate online search, selection, and reception behavior on scientific topics.
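
To make the combination of manual coding and computational analysis more concrete, the following minimal sketch (in Python with pandas) aggregates hypothetically coded articles into weekly attention series per outlet and computes a simple lagged correlation between two outlets. The outlet names, column labels, and toy data are invented for illustration, and the lag-1 correlation is only a rough heuristic for probing intermedia agenda setting, not the procedure used in the studies cited above.

```python
import pandas as pd

# Assumed input: one row per coded article, with outlet, publication date,
# and a (manual or automated) binary code for whether the article covers the
# topic of interest. All names and values here are hypothetical toy data.
articles = pd.DataFrame({
    "outlet": ["legacy", "online", "legacy", "legacy", "online",
               "online", "legacy", "online", "legacy", "online"],
    "date": pd.to_datetime(
        ["2021-01-04", "2021-01-05", "2021-01-06", "2021-01-12", "2021-01-13",
         "2021-01-14", "2021-01-19", "2021-01-20", "2021-01-26", "2021-01-28"]),
    "covers_topic": [1, 0, 1, 1, 1, 1, 0, 1, 1, 1],
})

# Aggregate the coded articles into weekly attention series per outlet.
weekly = (articles
          .groupby(["outlet", pd.Grouper(key="date", freq="W")])["covers_topic"]
          .sum()
          .unstack("outlet")
          .fillna(0))

# Correlate the legacy outlet's attention with the online outlet's attention
# one week later; a clearly positive lagged correlation is consistent with
# (though it does not prove) agenda transfer from the legacy to the online outlet.
lagged_corr = weekly["legacy"].corr(weekly["online"].shift(-1))
print(f"lag-1 correlation (legacy -> online): {lagged_corr:.2f}")
```

In actual studies, such series would be built from much larger samples and examined with time-series models rather than a single correlation coefficient.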

3 Main Constructs Employed in Science-Related Media Content Analyses

Content analyses on science communication have scrutinized diverse objects, issues, and fields. Most of them are single-discipline analyses, primarily (more than 80%) about the natural sciences or related research fields such as biotechnology (e.g., Schäfer 2009), medical research (Ruhrmann et al. 2015), climate science (Painter et al. 2016; Schäfer 2012b), nanotechnology (e.g., Anderson et al. 2010; Metag and Marcinkowski 2014), cloning (e.g., Holliman 2004), evolutionary psychology (Cassidy 2005), and astronomy (e.g., Kiernan 2000). The studies analyze the depiction of science in different media, such as newspapers (e.g., Gavin 2009a), TV (e.g., Göpfert 1996; Kessler 2016), or websites (e.g., Madden et al. 2012), and in different countries (albeit with a clear bias toward ‘Western’ countries; for an overview, see Schäfer 2012b).

Despite this diversity, several commonly analyzed constructs can be distilled from the field. Common analytical foci of these media content analyses are:

  1. the overall amount of scientific content: Individual studies and meta-analyses (for an overview, see Bauer 2011 and Schäfer 2017) have investigated the growth of science-related media coverage, mostly in traditional print media, in different countries (e.g., Bauer et al. 2006 in the UK and Bulgarian media; Clark and Illman 2006, Nisbet et al. 2003, and Pellechia 1997 in the US; Elmer et al. 2008 in Germany; Bucchi and Mazzolini 2003 in Italy; Vestergård and Nielsen 2017 in Denmark). They have shown that the amount of science-related media coverage grew until the 2000s, when the growth tailed off and stagnated (e.g., Bauer 2011). A minimal computational sketch of this construct is given after this list.

  2. the representation of different actors or sources in media reporting: Studies have examined the importance of different actors in media content and how they are depicted, focusing mostly on scientists and their roles, gender, or presentation (Albaek et al. 2003; Fähnrich and Lüthje 2017; Niemi and Pitkänen 2017; Van Gorp et al. 2014). Content analyses of German newspapers and TV talk shows demonstrated that scientists are quite visible in media coverage (Fähnrich and Lüthje 2017; Kessler and Lachenmaier 2017).

  3. evaluations of science and fundamental “modes” of coverage: Studies have analyzed the evaluation of science in media coverage (e.g., how critical, affirmative, or diverse the depiction is; cf. Bauer et al. 2006; Elmer et al. 2008; Nelkin 1995; Schäfer 2009; Vestergård and Nielsen 2017) and identified different modes of media coverage. Characteristic of research on science communication is the analysis of the “popularization” (Peters 1994), “science du chef” (Bucchi 1998), and “mediatization” (Schäfer 2009) modes of coverage. The popularization and science du chef modes are characterized by presenting scientific information that is explained by scientists or journalists but not problematized or critically questioned. In the mediatization mode, the general criteria of journalistic reporting and the media apply to science coverage: this mode often appears outside the science sections of newspapers, is triggered by socio-political or socio-cultural events, relies less on scientific sources, and is more confrontational and conflictual (Peters 1994; Schäfer 2009).

  4. the accuracy of the reporting as measured by scientific standards: Studies have often tried to assess the accuracy of media coverage about science by comparing it with either scientific publications (e.g., Ankney et al. 1996) or press releases (e.g., Brechman et al. 2009; Sumner et al. 2014). Other studies have focused on the accuracy of online representations of science and scientific findings, often driven by the assumption that the lack of quality control and journalistic gatekeeping online might result in substandard portrayals of scientific issues (cf. Barr 2011; Cacciatore et al. 2012; Gavin 2009b). A major focus of these content analyses has been how the (un)certainty of scientific evidence is presented by different strategic communicators and media and/or for different scientific issues (cf. Dudo et al. 2011; Guenther et al. 2019; Kessler 2016; Stocking and Holstein 2009), especially in risk communication (Anderson et al. 2010; Arlt and Wolling 2016; Cacciatore et al. 2012; Mellor 2010). Media coverage always deviates to some extent from scientific publications: it is often more exaggerated and sensationalist (e.g., Knudsen 2005), simplified and stripped of complexity (Brechman et al. 2009), and uncertainties are either omitted or misrepresented (Dudo et al. 2011; Guenther et al. 2019; Kessler 2016; Stocking and Holstein 2009).

  5. the framing of science and scientific findings: With the shift from one-directional science communication toward more dialogical science communication, which understands the media as more than translators of science, the analytical focus of content analyses has also changed (Schäfer et al. 2019). Arguably the most important analytical focus has been how science is “framed” in media reporting. Framing research has shown that different facets of science are selected and made salient in media coverage of different issues (e.g., Durant et al. 1998 for genetically modified organisms; Kessler 2016 for medicine; Ruhrmann et al. 2015 for molecular medicine; Nisbet et al. 2003 for biotechnology; Zimmermann et al. 2019 for genetic testing; Gerhards and Schäfer 2009 for human genome research; Schäfer and O’Neill 2017 for climate change). Some studies compared frames across media outlets (e.g., Boykoff 2008; Carvalho 2007) and countries (e.g., Boykoff and Boykoff 2007); cross-national framing analyses across longer time spans have even tried to develop generic frame sets that work across topics (e.g., Durant et al. 1998).

  6. visualization of scientific issues: Content analysis studies also examine visual science communication, i.e., the images used for various topics (Metag 2018; e.g., Rodriguez and Asoro 2012 on genetics and Wynn 2017 on nuclear power). Climate change is a particularly strong research focus in this context (e.g., Doyle 2007; O’Neill 2013; Wozniak et al. 2017). Some images are used so prominently in public communication that they achieve iconic status (Metag 2018).
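
As a simple illustration of the first construct in the list above (the overall amount of scientific content), the following sketch estimates the yearly share of science-related items in a corpus of headlines using a crude keyword dictionary. The dictionary terms, toy headlines, and column names are hypothetical; published studies typically rely on manually coded samples or validated category systems rather than such a simple word list.

```python
import pandas as pd

# Hypothetical dictionary of science-related terms (illustrative only).
SCIENCE_TERMS = ["study", "research", "scientist", "university", "experiment"]

# Toy corpus of headlines with publication years, invented for illustration.
corpus = pd.DataFrame({
    "year": [1998, 1998, 2005, 2005, 2012, 2012],
    "headline": [
        "New study links diet to heart disease",
        "Parliament debates next year's budget",
        "Scientists sequence another genome",
        "University opens a new research lab",
        "Experiment probes vaccine safety",
        "Stock markets close higher",
    ],
})

def mentions_science(text: str) -> bool:
    """Crude dictionary match: does the text contain any science-related term?"""
    lowered = text.lower()
    return any(term in lowered for term in SCIENCE_TERMS)

# Code each headline and compute the share of science-related items per year,
# a rough proxy for the 'overall amount of scientific content' construct.
corpus["science_related"] = corpus["headline"].apply(mentions_science)
share_per_year = corpus.groupby("year")["science_related"].mean()
print(share_per_year)
```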

4 Research Desiderata

A fundamental challenge for future research is to broaden the base of available knowledge about science-related content. For that, it would be useful to produce more comprehensive and comparative analyses that include different media (including online and mobile media), focus on different scientific issues and disciplines, and assess science communication in different countries, ideally over time. In particular, these studies should focus on aspects of science-related content analyses that are systematically underresearched (Schäfer 2012b; Schäfer et al. 2019), such as non-Western countries, non-print media and especially online communication, and disciplines beyond the STEM (science, technology, engineering, and mathematics) subjects.

Furthermore, content analyses should account for new communicative developments. For example, they should focus more on the increasingly multimodal nature of current communication, which encompasses textual, visual, and other elements (e.g., Wozniak et al. 2015; Zeng et al. 2021), on new communicative forms such as memes and GIFs (e.g., Lynch 2008), and on mobile communication (Taipale and Fortunati 2014).

In addition, the field would benefit from a more thorough attempt to share, re-apply, and standardize research instruments. So far, most individual studies have developed their own instruments, with very little exchange and standardization. This would also involve a more conscious and concerted effort to publish research instruments and data in the first place, in order to allow other researchers to reproduce instruments and measurements in as much detail as possible.

Relevant Variables in DOCA—Database of Variables for Content Analysis.

Scientific evidence/uncertainty: https://doi.org/10.34778/2h