
What do COVID-19 Tweets Reveal about Public Engagement with Nature of Science?


Using the social media platform Twitter, this study explores public reference to “scientific method(s)” in tweets specifically pertaining to COVID-19 posted between January and June 2020. The study focuses on three research questions: when did reference to scientific methods peak, which aspects of nature of science (NOS) do these tweets address, and do Twitter users’ sentiments provide useful information about their attitudes towards the scientific method? COVID-19 tweets were mined and queried using “scientific method(s)” as a keyword. A content analysis using the Family Resemblance Approach (FRA) to NOS and a non-computational sentiment analysis were conducted on the resulting data set. The findings revealed that tweets using “scientific method(s)” peaked during April and May, as more information was being communicated about promising treatments and vaccine development. Most tweets were assigned multiple FRA categories. The sentiment analysis revealed that attitudes towards the scientific method were predominantly supportive. Discussion of three events observed in clusters of tweets provided additional context. The paper concludes by noting the methodological affordances and limitations of applying the FRA for identifying NOS-related content in Twitter environments and by underscoring the potential of targeted NOS messaging in promoting informed discussions about NOS in the public sphere.


The emergence of the novel COVID-19 as a global pandemic has been one of the most palpable public health crises to face humankind in the recent past, affecting people from all walks of life within a very narrow temporal margin and generating panic for several reasons. When infected cases were first being reported, very little was known about the disease’s mode of transmission, its reproduction rate (the R number), and the mechanisms that accounted for its fatal complications. The quick rate of infection and the rising death toll in its wake led many countries to enforce lockdowns in an effort to control the spread of the disease. Lockdowns would allow medical personnel to manage the explosion in infections and use limited resources to reduce deaths from the complications of the disease. Concomitant with lockdown measures, rising economic pressures threatened jobs in several sectors of the economy and contributed to heightened anxiety and concern. While most countries opted to limit the spread of the disease by enforcing strict measures (lockdowns or limitations on public gatherings), a few followed a “herd immunity” approach that involved fewer restrictive measures.

The carefully formulated and scientifically accurate information about COVID-19 disseminated by credible health authorities in mainstream and social media stood in stark contrast with much of the random exchange about the pandemic taking place among the public on social media. While scientific facts related to COVID-19 were being shared periodically, the details paid little attention to clarifying companion NOS issues, unwittingly contributing to confusion about these facts or to reluctance to accept them. This became visible in connection with at least three instances: face mask effectiveness, novel treatments, and vaccine viability. First, there was the initial announcement that non-surgical face masks are not effective at limiting the spread of the disease, followed by the claim that regular non-surgical face masks are effective when social distance is maintained. Later announcements offered nuanced clarification regarding different levels of effectiveness depending on the type of mask worn. Lack of clarity regarding the reasons behind seemingly contradictory directives made them seem even more puzzling and confusing (Jingnan, 2020). Second, new therapies heralded by some politicians and the media in the early stages of treating the disease provided the public with a mix of hope and confusion about why these treatments were not more readily endorsed at the time, given the much publicized anecdotal evidence (Chamary, 2021; Dos Santos, 2020). Third, announcements promising effective vaccines in record time, compared with earlier vaccines that took 10–15 years to develop, left out details about the advancements in biomedical technologies that made this possible and the reassuring procedures used to monitor effectiveness during early-phase trials to safeguard the wellbeing of research participants (Brothers, 2020).
In other words, there was considerable information shared about COVID-19, but not enough was shared about the nature of science aspects that support citizens in placing this rapidly evolving knowledge in perspective and possibly improving factors of trust and compliance. Allchin (2020) recounted many examples of how the public dealt with a range of issues pertaining to expert advice and contested claims, dubious therapies, and conspiracy theories, noting that “the COVID-19 crisis has dramatically underscored the need for functional scientific literacy” (p. 1).

Functional scientific literacy involves a substantive understanding of the nature of science. Even though acceptance of scientific theories by the public is influenced by their political identities or religious affiliation, a recent study revealed that better knowledge of NOS has been found to relate positively to accepting scientific knowledge (Weisberg et al., 2020). Such acceptance does not guarantee action but is a pre-requisite for it—a matter of interest for those seeking public compliance with public health directives. Analyzing how the public alludes to aspects of NOS relative to a current issue that impacts them, like COVID-19, may inform public health officials and science communicators on how to package their messages to support understanding of the epistemic and social dimensions of science typically opaque to the general public. This is especially important given that “science communicators may inadvertently play to the public’s confusion about both the changing nature of science and the nature of certainty” (Sinatra & Hofer, 2016, p. 248).

With increased social distancing and in some cases isolation due to the pandemic, there has been an increased use of social media (e.g., Facebook, Instagram, TikTok, and Twitter) as go-to platforms for sharing views about life and coping mechanisms during the pandemic (Boursier et al., 2020; Pacheco, 2020). Users shared facts, opinions, myths, and conspiracy theories (e.g., Sharma et al., 2020). They also commented on the day’s latest news or retweeted it. Public health agencies (such as the World Health Organization [WHO] and the Centers for Disease Control and Prevention [CDC]), governmental ministries of health, as well as academic and private research institutes around the world took to Twitter to educate the public and communicate recommendations for controlling the spread of COVID-19. Highly regarded scientists used Twitter to disseminate videos in which they modeled proper hand-washing routines (Collins, 2020) or explained what virus mutations involve (Gupta, 2020).

While research on the scientific content of these tweets is available (e.g., Sharma et al., 2020), research on NOS in these tweets is not. This is due in part to the challenge of identifying a NOS indicator that helps distinguish tweets focused on science information from those focused on aspects of science and scientific activity. One proxy for identifying NOS-related tweets in a keyword search is the familiar phrase “scientific method.” In a brief and succinct account of its changing meaning and function over time, Thurs (2015) stated that “scientific method is a keyword (or phrase) that has helped generations of people make sense of what science was even if there was no clear agreement about its precise meaning—especially if there was no agreement about its precise meaning” (p. 212). Using this phrase solves the practical problem of distinguishing tweets with a NOS orientation from the hundreds of millions of tweets focused on information, advertisements, or general opinions. It enables the pursuit of questions that can begin to tackle public views about science relative to the COVID-19 pandemic.

This study takes place at the intersection of Twitter, COVID-19, and NOS (Fig. 1). It specifically focuses on what references to “scientific method(s)” in relation to COVID-19 tweets reveal about public engagement with NOS. This exploration focuses on the following research questions:

  1. When did mentions of “scientific methods” peak during this time period (the first five and a half months of 2020)?

  2. Which aspects of NOS do these tweets address?

  3. Does a non-computational analysis of Twitter users’ sentiment provide additional information about their attitudes towards the scientific method?

Fig. 1

The focus of this study lies at the intersection of Twitter, science (COVID-19), and NOS (scientific methods)

Choosing Twitter for this study is justified by its growing popularity in academic research and the fact that Twitter data “are more openly accessible” compared to other social media platforms (Ahmed et al., 2017). While Twitter ranks lower than Facebook and Instagram in number of users, it is described as “a very popular medium to communicate breaking news, digest bite-sized content, and communicate directly with … users in real-time” (Robinson, 2020). Ahmed et al. (2017) listed four additional reasons: the ease of finding and following “conversations,” a strong and widespread hashtag culture that facilitates retrieval of data, a tendency to receive more mainstream media attention, and greater familiarity to academics due to its use in professional contexts (such as conferences). Other platforms, such as Instagram, would not have been appropriate for this study due to their heavy reliance on images rather than text discussion. Facebook, on the other hand, is known as a platform primarily used to connect with contacts rather than to express ideas to a broader audience (Forsey, 2020), making it less likely to be effective for this type of analysis.

Social media platforms such as Twitter offer a ubiquitous and increasingly convenient source of information about current issues for adolescents and adults. Critical consumption of this content necessitates a higher level of digital literacy—which involves “the skills necessary to access, analyze, and evaluate all forms of information and communication” (Berson & Berson, 2003). Without that critical stance, social media users have no “compass” to vet science-related information or verify its credibility. Gaining knowledge of what ideas about science are being shared on these public platforms can help science educators and science communicators develop tools to support functional scientific/digital literacy. This study aims to contribute to this knowledge by identifying trends in NOS-related content and sentiments depicted in public exchanges on Twitter in the context of COVID-19.

Theoretical Framework

The NOS in science education refers to those aspects of science and scientific practices deemed appropriate for inclusion in K-12 settings. NOS is an essential component of scientific literacy and scientific proficiency, as has been emphasized in the most recent rounds of K-12 reforms in the United States (American Association for the Advancement of Science [AAAS], 1989; National Research Council [NRC], 1996; NGSS Lead States, 2013). Considerable research has been conducted on NOS over the last several decades, summarized in a number of recent reviews that provide a rich description of the landscape (for example, Abd-El-Khalick, 2014; Cofré et al., 2019; Deng et al., 2011; Lederman & Lederman, 2014). This section focuses squarely on the NOS work most directly relevant to the research questions pursued in this study: specifically, the scientific method and the conceptual framework used to analyze the tweets, known as the Family Resemblance Approach to NOS.

The scientific method, as traditionally expressed in textbooks and teaching resources, has been a longstanding source of concern for science educators, because it is portrayed as the process that scientists use to arrive at credible knowledge. In its most common and simplest representations, it is described as a lock-step process that starts with observations, followed by making hypotheses, carrying out experiments, analyzing findings, and making conclusions. The focus on this rigid sense of scientific method and its variants in school science has been widely criticized by philosophers and historians of science and science education. Despite recurrent complaints about its inadequacies and efforts to promote better conceptions, it remains strongly entrenched in school science textbooks (Blachowicz, 2009; McDonald & Abd-El-Khalick, 2017).

How did the scientific method get into school science, and what explains its longevity? In a well-documented and nuanced historical account of how science is taught, Rudolph (2019) described the origins and evolution of the scientific method in school science and provided some clues for why it remains with us to this day. Even though references to the scientific method existed earlier, by the mid-1880s “educators had accepted as axiomatic that the methods of the inductive sciences were the preferred means by which the students could learn” (p. 78). Tension was growing among practitioners between an emphasis on the laboratory method, with its focus on measurement accuracy and on procedures believed to be closest to the practice of scientists, and an emphasis on the descriptive and practical aspects of scientific knowledge that also considered emerging theories about adolescent learning. The mounting tension between the laboratory method and the utility value of science reached a crisis point as high school teachers struggled with having to choose between the two. Teachers tended to back away from the laboratory method in favor of alternative presentation modes such as demonstrations but “remained committed to teaching, in some way, the power of scientific thinking” (Rudolph, 2019, p. 79), though it was not clear to them how to teach it.

According to Rudolph, Dewey’s vice presidential address at the AAAS meeting in Boston in 1909 offered the potential solution that teachers were looking for. Dewey made the case that participating in laboratory exercises may get students to acquire the physical skills to handle scientific tools without ever realizing how these relate to scientific knowledge. Students needed to “see science as a way of reasoning” (Rudolph, 2019, p. 81). Dewey’s ideas, published in his book How We Think, gave educators an alternative way of presenting science. Since all thinking is ultimately a problem-solving activity, he proposed that “effective thinking … began with a problem that was resolved through appropriate reflection and consideration of the consequences of the action taken for its resolution” (p. 89). Dewey saw in this the development of a generalizable scientific habit of mind that can be applied to other issues one encounters in everyday life. He proposed that a

“complete act of thought” consisted of five “logically distinct” steps: (i) a felt difficulty; (ii) its location and definition; (iii) suggestion of possible solution; (iv) development by reasoning of the bearing of the suggestion; [and] (v) further observation and experiment leading to its acceptance or rejection. (Dewey, 1910, as cited in Rudolph, 2019, p. 90)

These steps, which were influenced by Dewey’s association with physical scientists at the time and inspired by the epistemology of science, were intended to provide a way of thinking about common problems encountered in everyday life. Reviewers of the book soon interpreted the five steps as an abstraction of the scientific method, and eventually this interpretation provided a basic structure for teaching science in school. Teachers could now focus on both the utility of science and the processes of science. What Dewey intended as psychological steps that guided thinking in many domains would eventually become entrenched in schools, as science educators found in them a more practical way to teach science than attempting to apply the logical elements of method proposed earlier. The popularity of the scientific method continued to rise despite the “war” against it that began by 1945, and it continues to dominate textbooks and curricula, much to the frustration of science education researchers, philosophers, and scientists.

Early objections to the scientific method preferred a “focus on scientific methodology and the scientific point of view” (Rudolph, 2019, p. 118). As typically presented in school science, the scientific method ignores the plurality of ways in which scientists conduct their work and consequently misrepresents the NOS. For example, not all scientific investigations use experiments, the role of modeling is totally ignored, and not all inquiries start out with observations or require hypothesis testing. By insisting on the application of specific components of the scientific method, its restrictive form risks becoming by default the go-to standard by which students distinguish between scientific and non-scientific knowledge and practices, as has been illustrated in cases where scientific knowledge contradicts personal or religious beliefs (e.g., Dagher & BouJaoude, 2005).

Science education curriculum policy documents in the United States tend to use the more inclusive plural form “scientific methods.” Science for All Americans (AAAS, 1989), for instance, specifically refers to scientific methods of inquiry as one of three foundational aspects of NOS. Under the umbrella of scientific methods, students are expected to understand that “science demands evidence, science is a blend of logic and imagination, science explains and predicts, scientists try to identify and avoid bias, and science is not authoritarian” (AAAS, 1989, pp. 25–31). In the most recent science education reform, connections to NOS state that “scientific investigations use a variety of methods” and proceed to outline what this expectation involves for K-12 students (NGSS Lead States, 2013, p. 5). Over a century since its proliferation in science education, the scientific method in its lock-step form continues to be taught in science textbooks, despite ongoing efforts to change this trend. Woodcock (2014) proposed that its stubborn longevity in school science derives from the several pragmatic functions it seems to serve: it offers an informational basis for how science works, gives students a template for writing their science reports, provides an access point for students to act as scientists, demarcates science as a rational field of inquiry, and elevates the level of objectivity associated with science. While these various functions help explain its persistence, they do not justify its unqualified use, given the many misconceptions it engenders about scientific practices.

Issues with the scientific method are often raised in NOS studies in science education. The myth of the scientific method is well-documented in the NOS literature (e.g., Allchin, 2004; McComas, 1996; Thurs, 2015). Advocates of the consensus view of NOS note that “the myth of the scientific method is regularly manifested in the belief that there is a recipe-like stepwise procedure that all scientists follow when they do science. This notion was explicitly debunked” (Lederman et al., 2002). Advocates of the Family Resemblance Approach (FRA) to NOS take issue with the mythic method and refocus attention on the aspects of scientific methods and methodological rules that are used within and across science disciplines (Irzik & Nola, 2014), arguing that the convergence of evidence drawn from different methods is critical for supporting strong theoretical claims (Erduran & Dagher, 2014).

Having acknowledged issues with narrowly conceived notions of the scientific method in the context of school science, it is important to note that the phrase has been used more broadly to refer to a general notion of systematic and disciplined problem solving typically associated with scientific activity. As Thurs (2015) stated, “educators, scientists, advertisers, popularizers, and journalists have all appealed to [scientific method]. Its invocation has become routine in debates about topics that draw lay attention, from global warming to intelligent design” (p. 211), even though its users may differ in what they mean precisely by that phrase. From a practical perspective, reference to “scientific method” or “scientific methods” as a keyword search phrase in a social media environment provided an accessible pathway for querying public thoughts about science. This may not have been possible had a less familiar descriptor been used. From this perspective, the construct “scientific method(s)” presents a concrete and feasible way to explore references to NOS in public conversations. Its widespread use in school science contexts, by scientists and nonscientists alike, made it the phrase of choice for querying a Twitter data set to capture views that, in one way or another, relate to science and scientists in the context of COVID-19.

Studies that explore NOS in social media are almost non-existent, offering little guidance for performing such analysis. Given that social media posts are likely to be highly variable in focus, intent, and detail, especially in relation to COVID-19, the NOS framework selected to analyze this content has to be conceived broadly enough to capture the widest possible range of content. Among existing NOS frameworks, the FRA seemed well-equipped for this task due to its inclusion of cognitive, epistemic, social, and institutional aspects of science (Irzik & Nola, 2014) and related organizational, financial, and political components (Erduran & Dagher, 2014).

The FRA framework packs substantive NOS content. Its four cognitive-epistemic categories define a range of Aims and values, Practices, and Methods and methodological rules, noting their importance in contributing to explanatory coherence, and provide detailed characteristics of different forms of scientific Knowledge and how they contribute to its growth. Similarly, its seven social-institutional categories refer to Scientific ethos, Social values, Social certification and dissemination, and Professional activities, all of which play a significant role in the conduct, evaluation, and certification of knowledge at the community level. In turn, all of this is influenced by how teams of scientists are organized (Social organizations and interactions), who finances their research (Financial systems), and what Political power structures moderate their interactions (Erduran & Dagher, 2014).

When used as an instructional framework, the FRA provides a holistic representation of critical NOS content that can be embedded in lessons using the categories as a guide. When used as an analytical framework, it enables the detection of aspects of NOS relative to the content described in each of its categories. When applied to a given text, the text is examined against the framework’s categories to determine which ones are being addressed. A text would be considered to have referred to the category of Aims and values, for instance, if it contains reference to matters such as coherence, rational argument, accuracy, revision in light of evidence, critical examination, or empirical adequacy, to name a few. Underpinning each of the 11 FRA categories is a set of characteristics that guide the interpretation of text or discourse. A brief description of these characteristics is provided in the first column of the Appendix. Studies that employed the FRA to NOS framework in textbook analysis (e.g., BouJaoude et al., 2017; McDonald, 2017) have found it useful in identifying distinctive features that would not have been noted with frameworks that are narrower in scope, contain fewer categories, or are skewed towards one dimension.

This study builds on the existing NOS literature, yet veers outside the traditional focus of NOS on school science while remaining relevant to it. Its relevance derives from the fact that students inhabit a modern society that is not bounded by school walls or geographical borders but one that is super connected and accessible through digital tools and social media. While it is important to acknowledge that access to these tools is not universal and that users of Twitter do not fully represent the populations to whom they belong, exploring the complexity of public opinions regarding matters scientific in social media environments has the potential to inform and enrich curriculum choices.

Twitter Studies

Using Twitter as a data source, rather than as an educational tool, is fairly novel in science education. This section first describes Twitter and then provides a summary of select Twitter studies.

Unlike other social media platforms, Twitter was founded on the sharing of short snippets of text rather than multimedia. In fact, this was initially the only mode of communication on the social network, with multimedia posts added later (First Versions, n.d.). This creates a unique culture on Twitter in which users post multimedia to supplement an idea expressed within a limited number of characters. The role of multimedia as a supplement to text makes Twitter a place to freely debate and discuss topics with a focus on brief snippets of text or images rather than longer-form media. The character limit also shapes the environment on Twitter, forcing users to be concise in expressing their ideas and responses. Twitter’s easy accessibility offers researchers opportunities to explore trends across publicly expressed views on a range of current issues, events, and domains.

Tweets are limited to 280 characters (originally 140), and they concisely communicate statements that relay new ideas or thoughts about, or reactions to, other tweets. The content ranges from advertisements, scientific and pseudoscientific claims, and public health announcements to personal views. The expressed views can be mundane or informed, referencing a news article, a book, or a recent breakthrough. They can also express a variety of sentiments, which has prompted closer examination of the nature of these sentiments and the development of algorithms that capture them. As shown in this brief review, research on Twitter is growing substantially in many fields “such as sociology, computer science, media and communication, political science, and engineering to name a few” (Ahmed, 2019, para 2) and has commonly been used to detect trends in the fields of business and health sciences.
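To illustrate how such sentiment-capturing algorithms work in their simplest form, the sketch below scores a tweet against small positive and negative word lists. The word lists and function name here are hypothetical, and real tools rely on far richer lexicons and handle negation and emphasis; this study itself used a non-computational sentiment analysis.

```python
# Illustrative lexicon-based sentiment scoring (hypothetical word lists).
POSITIVE = {"trust", "effective", "hope", "support", "promising"}
NEGATIVE = {"hoax", "fear", "confusion", "doubt", "dangerous"}

def score_sentiment(tweet: str) -> int:
    """Return a crude polarity score: positive word hits minus negative hits."""
    words = tweet.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
```

A positive score suggests a supportive tone, a negative score a critical one; the many tweets such a scorer misreads (sarcasm, negation) are one reason manual sentiment coding can be preferable for small data sets.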

The COVID-19 pandemic created a unique situation: a public health crisis that infiltrated the lives of the general public at a global scale, disrupting normal life and prompting lay people and experts alike to air their views on social media. Since the start of the pandemic, computer scientists began mining tweets related to the pandemic to investigate various research topics of interest. One of these datasets, which started collecting COVID-19-related tweets on March 20, 2020, has accumulated over 642 million tweets and counting (Lamsal, 2020). Because many of these datasets are public and widely available, many researchers began investigating a wide variety of questions related to the pandemic and people’s perception of it on Twitter. Using similar datasets, one group analyzed the spread of misinformation on the platform between March and June 2020 (Sharma et al., 2020). They identified top misinformation hashtags, the rise of misinformation by country, and emerging trends in COVID-19 discourse along four main categories: Unreliable (involving false or questionable news), Conspiracy (including both conspiracy theories and scientifically questionable news), Clickbait (intended to grab attention), and Political/biased (in support of a political orientation or view). Similarly, another group combined an analysis of how information spread on the platform with social media network analysis to gauge how effective South Korean public health officials were in disseminating information to the public (Park et al., 2020). Other groups investigated Twitter data to understand how users perceived new COVID-19 policies (Lopez et al., 2020). Many researchers have also attempted to understand how conspiracy theories start and spread; one group was able to trace a notable conspiracy claiming that the Coronavirus was a hoax to a single account using social network analysis (Gruzd & Mai, 2020).
The information uncovered by these studies can benefit public health outreach efforts in designing targeted messages aimed at deconstructing disinformation. They are also useful for guiding discussions with students about science-related news they encounter on social media.

A recent study in science education examined biweekly conversations taking place during one-hour synchronous sessions over a 2-year period (2014–2015 and 2015–2016) on #NGSSChat, a Twitter professional network site (Rosenberg et al., 2020). Using a mixed-method and social network analysis, Rosenberg and Colleagues noted how different conversations (transactional, sensemaking, and transformational) pertaining to the Next Generation Science Standards (NGSS Lead States, 2013) were distributed among participants. They found that the participants represented diverse stakeholders from states that have and have not adopted the NGSS, noted that one-third of the participants were teachers, and found that teachers and administrators were more likely to maintain engagement over the second year compared to researchers and organizations. The study’s findings provide useful information for supporting engagement on dedicated professional social media sites to enhance their effectiveness in supporting reform efforts.

Other studies have focused on analyzing the effectiveness of Twitter as an educational tool to support teaching and learning. For example, a recent review of 103 Twitter studies found that integrating Twitter in classroom activities positively impacted teaching and learning goals and motivated learners to actively engage with course content (Malik et al., 2019). Innovative integration of Twitter in science education has also been used to support the attainment of specific educational goals. In one study, a Twitter discussion board was developed to support medical students’ learning during a difficult neuroanatomy module (Hennessy et al., 2016). In another study, Halpin (2016) used Twitter effectively in a biological science college freshman course for non-majors to support student use of reputable science websites. The intervention resulted in students shifting from relying mostly on Google searches to relying on periodical searches, tripling the number of references to the Centers for Disease Control and Prevention’s website and quadrupling the use of science periodicals in Twitter-based discussions. Students not only went beyond the required number of tweets but also commented on the expansion of their understanding of scientific research and their appreciation of new daily findings in science—a realization that was accelerated through using the Twitter platform. These findings point to the role of the Twitter platform in facilitating engagement with cutting-edge science content, enabling students to notice aspects of the growth of scientific knowledge (an important component of NOS) that may not have been noted otherwise.

In sum, the reviewed studies illustrate the range of possibilities that Twitter offers as a research site for analyzing trends in a variety of fields and as a digital tool to augment learning. The next section presents the challenges involved in analyzing Twitter data from a NOS perspective and describes the logistics involved in executing this study.


This descriptive study uses Twitter to explore public views about the nature of science. Twitter as a social media platform provides a very rich source of data. According to Sayce (2020, para. 1), as of May 2020 an average of around 6,000 tweets were sent every second, or about 350,000 tweets per minute, 500 million tweets per day, and 200 billion tweets per year. This huge and unwieldy source of data can be approached from many methodological perspectives (see Sloan & Quan-Haase, 2016). The methods described in this section aim to answer the three research questions introduced earlier: when did mentions of “scientific methods” peak during the first five and a half months of 2020, which aspects of NOS do these tweets address, and what can be learned about users’ attitudes towards the scientific method.

A key step in this endeavor is mining tweets for mentions of COVID during the designated time period, querying the dataset for specific keywords (“scientific method” or “scientific methods”), and then analyzing patterns in the data related to frequency, content, and attitudes. The following sub-sections describe data collection and analysis procedures (technical and conceptual) and ethical considerations.

Data Collection and Analysis

The data collection included almost 300 million tweets related to COVID-19, crawled from January to June 2020. To maximize the coverage of the tweets, the data were collected through three different channels: (1) the Twitter search API (Application Programming Interface), which makes it possible to search for historically relevant tweets; (2) the Twitter stream API, which provides live sampling of at most 1% of relevant tweets; and (3) a publicly available Twitter dataset related to COVID-19 (Huang et al., 2020). In general, this integrated data set provides a good sample of the published tweets that are relevant to COVID-19. With the integrated COVID-19 Twitter data, it is possible to perform multiple searches using queries related to our focus and study the retrieved tweets. Given a query, in this case “scientific method(s),” all the tweets from the integrated data set are ranked based on their relevance.

The relevance score of a tweet with respect to a query is computed using Okapi BM25 (Robertson et al., 1994; Robertson & Zaragoza, 2009), a standard keyword-matching retrieval function that has been widely used by various search engines. Formally, given a query Q and a document D, the relevance score S(Q,D) is computed as a sum over every word w occurring in both the query and the document, where df(w) is the number of documents in the collection containing w, c(w,D) is the number of occurrences of w in D, |D| is the number of words in D, N is the total number of documents in the collection, and avdl is the average document length across the collection. k and b are two tunable parameters, set to their default values (1.2 and 0.5, respectively) in our experiments. Intuitively, this retrieval function rewards tweets that match more occurrences of important query terms. The absolute score of a tweet is not meaningful in itself; rather, the relative ranking of the tweets reflects their relevance with respect to the query.
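Under the standard Okapi BM25 formulation with the parameter values stated above (k = 1.2, b = 0.5), the scoring can be sketched as follows. The toy corpus, pre-tokenized word lists, and function name here are illustrative, not the study’s actual retrieval pipeline, and the smoothed IDF shown is one common variant.

```python
import math
from collections import Counter

def bm25_score(query, doc, corpus, k=1.2, b=0.5):
    """Okapi BM25 relevance of `doc` to `query`, given a list of tokenized docs."""
    N = len(corpus)                                  # total number of documents
    avdl = sum(len(d) for d in corpus) / N           # average document length
    tf = Counter(doc)                                # c(w, D): term counts in D
    score = 0.0
    for w in set(query) & set(doc):                  # words in both query and doc
        df = sum(1 for d in corpus if w in d)        # df(w): docs containing w
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1)  # one smoothed IDF variant
        norm = tf[w] * (k + 1) / (tf[w] + k * (1 - b + b * len(doc) / avdl))
        score += idf * norm
    return score

# Illustrative mini-corpus: rank documents against the query.
corpus = [["scientific", "method", "covid"],
          ["covid", "vaccine", "trial"],
          ["scientific", "methods", "matter"]]
q = ["scientific", "method"]
ranked = sorted(corpus, key=lambda d: bm25_score(q, d, corpus), reverse=True)
```

As in the study, only the relative ordering of `ranked` matters; the raw scores depend on the corpus statistics and the chosen IDF smoothing.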

The top 1000 retrieved tweets, whose relevance scores ranged from 5.3 to 11 (11 being the most relevant), were analyzed. The cut-off of 1000 tweets was arbitrary, with the plan to retrieve additional tweets if warranted. As the relevance scores decreased (moving from the first to the thousandth tweet), more tweets were disqualified. For example, only a third of the tweets ranked 1 to 100 were disqualified, compared to two-thirds of the tweets ranked 900 to 1000. This dwindling in the number of eligible tweets, which dropped to zero in the last 25, signaled that data “saturation” was reached, affirming that additional data would not likely yield new insights or trends (Saunders et al., 2018).

Data Analysis Procedures

Once the COVID-19 tweets pertaining to scientific method(s) were extracted, the authors surveyed their content and discussed the viability of analyzing it using the NOS concepts described in the extensive account by Erduran and Dagher (2014). Since the FRA was originally developed to characterize science, the first two authors slightly adapted the detailed descriptions of these categories to enable their identification in public exchanges about science (Appendix, 2nd column). Such a process is necessary when applying the FRA lens to specific contexts such as textbook analysis (McDonald, 2017) or classroom teaching (Chaparian, 2020). What is different about this adaptation is its application outside the realm of formal education contexts, in a social media environment that is much more fluid, dynamic, and non-scripted. Naturally, the researchers examined the tweet’s content relative to its context (if it quoted another tweet) against the FRA categories. They do not claim knowledge of the users’ intentions as those are inaccessible to them. This would be comparable to analyses of science textbooks for certain features without necessarily accessing or verifying the authors’ intentions.

The first two authors reviewed a subset of 25 tweets independently using the FRA-based content analysis guide developed for this purpose. They then met to compare their sets of coded tweets, discussed their justifications, worked on another set independently, and discussed it again to reconcile differences. They repeated this process one more time to accomplish two goals: to verify the analytical framework’s suitability for the task and adjust it as needed, and to establish common procedures to guide the analysis of the full data set. After three rounds of piloting different parts of the data set, they scored every single tweet concurrently and interactively to ensure consensus on the categories assigned to it, resulting in higher confidence than would otherwise be the case. During the analysis, the researchers kept a record of interesting aspects of the tweets and noted instances where tweets made independent references to the same article or event. At the conclusion of the analysis, the researchers held a debriefing session to summarize their impressions. Notes from this discussion were compiled and further developed in the discussion section of this paper.

Conceptual Aspects

Piloting the framework with different sets of tweets led to agreements among the researchers on the conditions that justify disqualifying a tweet and on general guidelines for conducting the content analysis, described next.

  I. A tweet was not considered for analysis if it:

    a. Does not contain the terms scientific method(s) or does not include these terms contiguously.

    b. Contains the terms but the content is irrelevant to COVID.

    c. Quotes another tweet that is not accessible to the researchers, either because the quoted tweet is not in the English language or because it is unavailable (deleted by user or platform). The first decision is justified on the basis that we found online language translators to fall short of depicting the tweets’ meaning accurately, and the second decision is justified because tweets that reference a tweet that is no longer available cannot be interpreted properly without access to the full context.

  II. Once deemed eligible for analysis, a tweet’s content was considered using the FRA conceptual framework. A brief description of the NOS categories and their adaptation for tweet analysis is presented in the first and second columns of the Appendix. The categories serve as short-hand references to the NOS content descriptions. The researchers referred to the FRA’s full account (Erduran & Dagher, 2014) in their adaptation of the categories for this study, and while analyzing the tweets whenever the description fell short of providing adequate guidance. In this sense, the FRA categories are content-rich and not mere labels.

The following guidelines were established:

  a. No limit is set on the number of FRA categories that can be assigned to a given tweet.

  b. A tweet is not assigned the scientific methods category unless it refers to specific aspects pertaining to the broad or lockstep senses of method referenced under this category in Table 1. This is necessary to distinguish the mention of the term from its content as defined in the analytical framework.

  c. Reference to sentiment was tracked for each tweet in anticipation of its potential utility in further contextualizing the results, and in light of a growing trend to study it in social media research (Feldman, 2013; Sani et al., 2013). The following sentiment categories were recorded: support, critique, both (supporting some aspect and critiquing another), neutral, and, where clearly evident, satire.

Table 1 Paraphrased examples of qualified and disqualified (DQ) tweets

The process of allocating FRA categories and sentiments to the tweets in the data set required considerable judgment and inference, limited by two constraints: the brevity of the tweet and the lack of access to the users’ intended meaning. The consensus process by which data analysis was conducted helped maintain consistency in allocating FRA categories as the analysis progressed. The Appendix includes examples of masked tweets (for reasons explained in Sect. 4.3) tagged with a single category (3rd column) and justifications that convey how the researchers used the descriptors and questions (1st and 2nd columns) to determine the NOS content of the tweet.

Procedural and Technical Aspects

To facilitate the process of assigning content codes to the tweets, a data interface program was developed to activate a coding scheme that allows the researchers to record a given tweet’s content. The content included the FRA categories, sentiment, and month. Three scripts were created to handle and process the thousand banked tweets. The tweets were banked in a CSV (comma-separated values) file, which was modified to contain only the links to each tweet. Two languages were used in the process of analyzing data: Python (2016, Version 3.6) and MATLAB (2019, R2019b). Python is a general-purpose high-level programming language, and MATLAB is an integrated programming and numeric computation environment. The first script, created in Python, would open tweets in the order they are indexed in the CSV file. This program allowed for efficiency in opening the tweets for viewing and tagging by the researchers. The second script, also in Python, was a GUI (Graphical User Interface) designed to track and store the tagged data. Upon clicking the submit button, the GUI stores the number of the tweet being rated in a spreadsheet where each category has its own column, and then increments the tweet number by 1. For example, if tweet #22 was tagged with “Aims & Values,” “Methods,” “January,” and “Support,” then #22 would be appended to those respective columns. The GUI would automatically disregard further input for tweets in the Disqualified, Advertisement, Further Review, or Irrelevant categories, and those tweet numbers were placed in their respective columns.
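The GUI’s column-append bookkeeping can be sketched minimally in Python as follows; the dictionary-of-lists store, the function name, and the example tags are illustrative assumptions, not the study’s actual interface or CSV code.

```python
# Sketch of the tagging store: each category acts as a spreadsheet column,
# and a tweet's number is appended to every column it was tagged with.
EXCLUDED = {"Disqualified", "Advertisement", "Further Review", "Irrelevant"}

def record_tags(store, tweet_number, tags):
    """Append tweet_number under each tag column; a tweet hitting an
    excluded category is recorded there and takes no further tags."""
    hit = EXCLUDED.intersection(tags)
    for tag in (hit if hit else tags):
        store.setdefault(tag, []).append(tweet_number)
    return store

store = {}
record_tags(store, 22, ["Aims & Values", "Methods", "January", "Support"])
record_tags(store, 23, ["Disqualified", "Methods"])  # excluded category wins
```

This mirrors the behavior described above: tweet #22 lands in all four of its columns, while a disqualified tweet is recorded only in the Disqualified column.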

The last script, created in MATLAB, built a second matrix that counted the number of associations between FRA categories (e.g., Aims and values, Practices, Methods, and Knowledge). For example, suppose tweets 12 and 13 were the only tweets tagged with both “Aims and values” and “Social certification and dissemination.” On the first pass, the program would recognize that this association was not yet in the matrix and would create a new row: the first column records the pair of category numbers (categories were numbered 1–12 for simplicity), and the second column records 1, since at that point the association had been found only once. Upon iterating to tweet 13, the program recognizes that the association is already in the matrix by comparing the pair value against each value in the first column, and increments the count in the second column by one, indicating that two tweets now share this association.
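The same association counting can be expressed compactly with a pair counter. This is a sketch of the logic in Python, not the authors’ MATLAB script, and the tweet numbers and tag names are illustrative.

```python
from collections import Counter
from itertools import combinations

def count_associations(tagged_tweets):
    """Count how many tweets share each pair of FRA categories."""
    pair_counts = Counter()
    for tags in tagged_tweets.values():
        # every unordered pair of categories co-occurring in one tweet
        for pair in combinations(sorted(tags), 2):
            pair_counts[pair] += 1
    return pair_counts

tweets = {
    12: ["Aims and values", "Social certification and dissemination"],
    13: ["Aims and values", "Social certification and dissemination"],
    14: ["Knowledge"],  # single tag: contributes no pair
}
counts = count_associations(tweets)
```

Sorting the tags before pairing plays the role of the matrix’s first-column lookup: the same two categories always map to the same key regardless of tagging order.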

Ethical Considerations

All Twitter users are required to agree to the Twitter User Terms of Service prior to creating an account on the site, thereby enabling them to tweet. The agreement states that Twitter has the right to distribute, promote, and publish, among other things, any content posted on its forum. Twitter also retains the exclusive right to make such data available to other companies, organizations, or individuals at its discretion. The avenue the researchers took to retain these rights was through the Twitter Developer license Terms of Service. Developers are expected to exercise caution in making inferences based on sensitive characteristics of users (such as political affiliation, beliefs, or sexual orientation). Furthermore, the Twitter Developer Terms contain specifications on how to display tweets that are being directly quoted.

Publicly available tweets may be analyzed and researched and, for the most part, do not require additional participant consent. However, just because the data is public does not mean that it is exempt from ethical considerations (see the case reported by Zimmer, 2010). Relevant to this analysis, Ahmed et al. (2017) noted that “there are legal and ethical implications to using social media data posted by people who may have been sending tweets while in a vulnerable state of mind, e.g., during a disaster, or health outbreak” (p. 82). During such times in particular, individuals may not be aware that the information they shared is being analyzed and reported. Moreover, their tweets may subject them to social and political risks that may not have occurred to them or to the researchers. Therefore, social media researchers are expected to respect user rights to anonymity and confidentiality (Beninger, 2016).

According to Williams et al. (2017), it is currently not standard practice to include tweets in academic studies due to the attachment of personal information such as username, possible legal name, personal profile picture, and geographical location to the tweet. Additionally, even if the identifying information is masked, the tweets may still be accessible via search engines. Williams et al. (2017) also provided a risk assessment protocol for determining how researchers should use the data. Because of the interpretive level involved in the content analysis of the tweets, providing examples is important for illustrating how the FRA was applied. To avoid revealing personal identities and actual tweets (which can be searchable), we paraphrased the sample tweets included in the Appendix (3rd column) to render them unsearchable. Doing so prevents the risk of revealing Twitter user information and identification through searchability while also respecting users’ right to be forgotten (in case they delete their tweets after this publication).


Following the norms described earlier, of the 1000 tweets in the dataset, five were tagged as advertisements, 15 were tagged as irrelevant, and 386 were disqualified. Tweets falling in these three categories were not tagged with a sentiment, a month, or any FRA analytical framework category. The researchers report an error rate of 0.6% when rating the tweets. This figure derives from noted errors in user interaction with the GUI, which produced a total of 1006 ratings from the dataset of 1000 tweets.

The full set of tweets that were content-analyzed according to the FRA analytical framework constituted 600 tweets; each was tagged with a sentiment, a month, and as many content tags as necessary to classify its meaning. Of these 600 tweets, 0.3% were tweeted in January, 0.8% in February, 10.2% in March, 50.8% in April, 30.4% in May, and 7.5% in the first half of June. The distribution of these tweets, as shown in Fig. 2, peaked during the months of April and May before dropping off towards the end of the data collection period.

Fig. 2

Distribution of COVID tweets with scientific method mentions between January and mid-June 2020

Across the 600 tweets, 1102 tags were assigned; in other words, tweets were often tagged with more than one of the 11 FRA categories. On average, each tweet received 1.8 FRA categories. Figure 3 shows the distribution of categories (single and multiple) assigned to the tweets in the data set. The most frequently used tag was Knowledge, which was associated with 34% of tweets. This was followed by Methods and methodological rules with 25% of tweets, Political power structures with 17%, Aims and values with 9%, Practices with 5%, Social certification with 4%, Scientific ethos with 3%, Financial systems with 2%, Social values with 0.5%, Social organizations and interactions with 0.4%, and Professional activities with 0.1% of tweets.

Fig. 3

Percentage distribution of FRA categories across tweets in the entire data set

The FRA framework contains four categories in the cognitive-epistemic dimension and seven categories in the social-institutional dimension. Exploring which categories were most referenced within each individual dimension gives a sense of the extent to which some were more present relative to others in the same dimension. The cognitive-epistemic dimension includes four categories: Aims and values, Methods and methodological rules, Practices, and Knowledge. Figure 4 shows that Knowledge (47%) accounted for the largest share of tags within the cognitive-epistemic dimension. Methods and methodological rules accounted for the second-highest percentage (33%) when compared internally to the other cognitive-epistemic categories. Aims and values accounted for 13% of tags, and Practices for 7%.

Fig. 4

Percentage distribution of cognitive-epistemic categories in the analyzed tweets

The social-institutional dimension of the FRA has seven categories: Social certification and dissemination, Professional activities, Scientific ethos, Social values, Financial systems, Social organizations and interactions, and Political power structures. Within this dimension, the categories were distributed as shown in Fig. 5. The category of Political power structures received 63% of tag allocation, while Social certification and dissemination accounted for 14%, followed by Scientific ethos at 12% and Financial systems at 8%. Much less common were references to Social values (2%), Social organizations and interactions (1%), and Professional activities (0.3%).

Fig. 5

Distribution of social-institutional categories in the analyzed tweets

A breakdown of FRA categories allocated to tweets within each of the dimensions simply amplifies existing trends. For example, in the cognitive-epistemic dimension, the categories of Knowledge and Methods and methodological rules accounted for 80% of assigned tags, while in the social-institutional dimension, Political power structures and Social certification and dissemination accounted for 77% of the assigned categories. What is useful in considering the findings from the standpoint of each dimension is noting that some categories were referenced more than others. This variation could be incidental, driven by the focus of the events/interactions, or a reflection of limited awareness of these aspects of the nature of science. Noting variation in the data is useful for generating new questions, but explaining it goes beyond the parameters of this analysis.

Each tweet was tagged with as many categories as its content justified, which created a series of associations between frequently used tags. Table 2 lists the percentage of tweets assigned to single or multiple FRA categories. Those listed (above 1%) account for 74.4% of the tagged tweets in the dataset. Note that the listed combinations are exclusive; for example, Knowledge was assigned on its own to 15.7% of the 600 tweets and in conjunction with other tags to 38.3% of them. To avoid confusion, Methods and methodological rules is abbreviated in this section as “Methods.” The most common exclusive category was Knowledge, at 15.7%; this was followed by the combination of Methods and Knowledge at 12.8%, and then by Methods alone at 8.8%. Knowledge and Political power structures were combined in 8.3% of tweets, Methods and Political power structures in 7.2%, and Aims and values and Knowledge in 6.2%. Methods, Knowledge, and Political power structures appeared together in 4.8% of the rated tweets. Practices alone and the combination of Practices and Knowledge each appeared in 2.7% of tweets. Aims and values, Methods, and Knowledge were collectively tagged in 1.8% of tweets. Aims and values alone and the combination of Aims and values, Knowledge, and Political power structures were each assigned to 1.7% of tweets.

Table 2 Percentage of tweets (above 1%) that were assigned individual and multiple FRA categories
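Counting exclusive combinations of the kind reported in Table 2 amounts to tallying each tweet’s exact tag set. A minimal sketch follows, with an invented four-tweet sample rather than the study’s data; the function name is illustrative.

```python
from collections import Counter

def combination_percentages(tagged_tweets):
    """Percentage of tweets carrying each exact (exclusive) set of FRA tags."""
    # frozenset makes the tag combination order-independent and hashable
    combos = Counter(frozenset(tags) for tags in tagged_tweets)
    total = len(tagged_tweets)
    return {tuple(sorted(c)): 100 * n / total for c, n in combos.items()}

sample = [["Knowledge"], ["Knowledge"], ["Methods", "Knowledge"], ["Methods"]]
pcts = combination_percentages(sample)
```

Because each tweet contributes to exactly one combination, the percentages for all combinations sum to 100, matching the exclusive reading of Table 2.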

Tweets receiving multiple categories are mapped onto the FRA Wheel (Erduran & Dagher, 2014, p. 28), providing a visual representation of which categories were associated with one another within and across the cognitive-epistemic and social-institutional dimensions of science (Fig. 6). The two thicker bi-directional arrows represent combinations that were relatively more frequent (8–15%) compared to those that were less frequent (1.7–8%). Knowledge is the most commonly referenced category, appearing in seven of the eight different tag combinations represented on the FRA Wheel. Two of these were among the most frequently occurring associations (Knowledge and Methods; Knowledge and Political power structures). Also noteworthy is that the category of Political power structures appears in four of the eight associations with other categories. The lopsided placement of the connections shows that they occur mostly among three of the cognitive-epistemic categories and between those and one social-institutional category. Single or multiple categories receiving less than 1% were considered too infrequent to warrant inclusion.

Fig. 6

The arrows super-imposed on the FRA-Wheel indicate which FRA categories were associated with one another in individual tweets

Tweets were also tagged with a perceived sentiment towards the scientific method. Figure 7 depicts the distribution of sentiment types across rated tweets. The majority of the tweets (87%) were in support of the scientific method, a minority (5%) critiqued the scientific method, while 8% remained neutral in their discussion of the term. None of the tweets were tagged with “support and critique” sentiments.

Fig. 7

Percentage distribution of sentiments in tweets relative to users’ references to scientific method(s)

Finally, there were three widespread topics of discussion relating to three articles: He was a science star. Then he promoted a questionable cure for Covid-19, by Sayare (2020); Doubt is essential for science – but for politicians, it's a sign of weakness, by Al-Khalili (2020); and The scientific method can’t save us from the coronavirus: What we need is problem-solving — creativity, flexibility and teamwork, by Cowles (2020). These tweets and the pertinent articles were “flagged” because they were either shared frequently or directly referenced in discussion throughout multiple tweets from various users. We discuss these further in the next section.

In summary, this study reveals that COVID-19 tweets invoking the scientific method between January and mid-June 2020 were most frequent during the months of April and May. The analysis shows that the frequency of tweets assigned to single FRA categories ranged from high to low in the following order: scientific knowledge, scientific method, followed by practices and aims and values. Tweets that were tagged with multiple categories were more frequent than those tagged with single categories.


The findings provide information about how tweets related to COVID-19 expressed views pertaining to NOS using the phrase scientific method or scientific methods. These views often integrated multiple aspects of NOS. This section contextualizes the findings and discusses their relevance to science education.

The analysis performed in this study does not permit drawing causal explanations for why tweets about scientific method posted in the first half of 2020 reached their highest levels during the months of April and May. We do note, however, that during this time period, more communication about the virology of the disease was taking place. It was also the period during which open speculations about effective treatments were being discussed widely, announced by a handful of scientists, and promoted by politicians—possibly triggering more interest in or allusion to scientific method. While it is possible to chronicle the progression of COVID-19 events (e.g., World Health Organization, 2020), it is not reasonable to draw causal relationships between those events and the content of Twitter conversations.

The content of tweets in this data set touched on a range of NOS aspects as interpreted through the FRA lens. The majority of tweets (68%) addressed multiple NOS categories, which is not surprising, especially in the context of the socio-scientific issues precipitated by COVID-19. The category of Political power structures was the third-most tagged in the data set, following the categories of Knowledge and Methods (as noted in Fig. 3). This reference may have been triggered by the palpable tensions generated by the apparently slow rate of knowledge production related to the pandemic’s treatments and vaccines, and the social and political pressures exerted to speed up the process to save lives and limit damage to the economy (caused by lockdowns). The vast majority of tweets that identified politicians were aimed at American President Donald Trump and British Prime Minister Boris Johnson, critiquing them for not following the scientific method, or for attempting to interfere with it and pressure scientists to change what they do, in effect undermining the scientific method. Several praised German Chancellor Angela Merkel for her scientific background and ability to communicate scientific information to the general public. India’s Prime Minister Narendra Modi was noted in a positive light for his efforts to limit the transmission of COVID-19 in India. These tweets captured tensions between calls to maintain scientific rigor, methodological rules, and scientific ethos, and advocacy for “the right to try” experimental treatments and the notion that urgent issues justify relaxing strict measures.

In the process of analyzing the dataset, we noted clusters of tweets and retweets related to three notable events. One event revolved around the controversial French doctor and virologist, Didier Raoult, who touted the effectiveness of chloroquine based on a clinical trial. Raoult is described as a self-willed pioneering scientist who has contributed greatly to, and been recognized by, the scientific community (see Sayare, 2020, for a feature article). According to Sayare (2020), Dr. Raoult’s approach breaks many “conventional standards” of science. He sees himself as an outsider who, by going against the scientific establishment along with its traditional “methods,” has become one of the most successful microbiologists in France. He gained notoriety related to the pandemic when he urged governments to disregard stringent methodology and ultimately consent to physicians using the anti-malaria drug hydroxychloroquine as a treatment for COVID-19, a position that was strongly echoed by President Trump in his daily press conferences. Although Dr. Raoult had been pursuing hydroxychloroquine as a cure since late February, many media outlets began reporting on his work, adding fuel to the fire. Much of this discussion revolved around individual users’ definitions of methodology and whether they believed that scientists should relax methodological rules in times of crisis. Many users who expressed support for the scientific method nonetheless critiqued its rigid structure and the “ivory tower” nature of its enquiry. Some used the article to justify proposed changes to drug efficacy testing rules to allow for more and larger experimental trials, while others defended the current system for its effort to protect patients from treatments that have not been properly validated.

The second was related to a newspaper article entitled Doubt is essential for science – but for politicians, it's a sign of weakness, written by the renowned physicist Jim Al-Khalili (2020). This article juxtaposed the values of science with the values of politics and of some conspiracy theorists. Al-Khalili claims we face a public crisis of trust in science: the public is used to policy being built on a concrete foundation with little subject to change, whereas change and doubt exist at every level of the scientific process. The author also notes how the defense of conspiracy theories misuses the same guise of doubt as science, except that those who defend them weaponize doubt against counterevidence rather than consider that their own position may be incorrect. Al-Khalili attributed this tendency to biases such as cognitive dissonance. The article generated a large amount of discussion on Twitter about Political power structures, Knowledge, and Methods. Much of the conversational sharing of this article was supportive of the scientific system, although many users had differing opinions on what the “scientific method” is. Most references to this article alluded to political interference with science by way of questioning strict adherence to methodological rules or social certification given the pressures of the raging pandemic.

The third set of tweets related to a Washington Post article entitled The scientific method can’t save us from the coronavirus: What we need is problem-solving — creativity, flexibility and teamwork, by Cowles (2020), a historian of modern science and medicine. The article traces the lock-step version of the scientific method to John Dewey’s depiction of method as a broadly conceived way of reasoning and problem solving that was taken out of context, as detailed in Rudolph’s (2019) recent historical account. Cowles explained that the free-flowing, creative, and collaborative approach to problems, which is not found in the lock-step scientific method, is what we need to move our knowledge of the COVID-19 pandemic forward. He emphasized the social nature of scientific work and how it relies on the collective of scientists everywhere. He affirms: “If science saves us, though, it will be because it lacks a single method. The novel coronavirus causing the current crisis presents a multidimensional challenge — to personal, public, economic, and mental health. There is no single tool with which to confront such a threat; what we need is a vast tool kit.” As expected, the reaction to the article’s content on Twitter was mixed: some users supported the ideas the author proposed, while others argued that he had a poor understanding of the scientific method.

The majority of the reviewed tweets were civil in tone; many commented on published articles such as those mentioned earlier, highlighted prevention methods, or posed questions to generate discussion. There was considerable discussion of the various models being used to track and predict the rate of spread of the virus. Much of this discussion focused on the efficacy of lockdowns and speculated on how long the pandemic would take to run its course. Some tweets questioned the accuracy of the presumed causes of death in some COVID-19 death counts and how these determinations influence the data and the models. Several tweets in this dataset shared conspiracy theories concerning the involvement of 5G, Bill Gates, or a Wuhan lab in the creation and propagation of the virus.

While the non-computational sentiment analysis was interesting, it was not particularly illuminating. In addition to the high level of inference involved in making a sentiment judgment from a researcher's standpoint, the users' perceived support or critique was not associated with a clearly discernible sense of method (restrictive or open). For example, tweets perceived to express support for the scientific method were used to attack politicians for interfering with science, to praise scientists for following it, or to chastise them for not following "their own method." It was often noted that the same person (scientist or politician) would be praised for following scientific methods by one user and criticized for not following them by another, in reference to the same issue. Given the brevity of Twitter posts, the sense of method a tweet was invoking (the lock-step model or a more sophisticated conception) could not always be discerned. The combined issues of high inference in sentiment judgments and lack of clarity about which sense of method is in play speak directly to one of the documented limitations of computational sentiment analysis of social media posts, irrespective of focus (see Zimbra et al., 2018).
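To make the contrast with computational approaches concrete, the following is a minimal sketch of a lexicon-based sentiment pass of the kind surveyed by Zimbra et al. (2018). The word lists and example tweets are hypothetical, and real Twitter sentiment tools use far richer lexicons, negation handling, and machine-learned models; the sketch merely illustrates why such methods struggle with the ambiguity described above, since identical words ("method", "science") appear in both supportive and critical tweets.

```python
# Hypothetical, hand-made word lists for illustration only.
POSITIVE = {"trust", "support", "praise", "rigorous", "hope"}
NEGATIVE = {"hoax", "fraud", "interfering", "ignoring", "failed"}

def lexicon_sentiment(tweet: str) -> str:
    """Classify a tweet as 'positive', 'negative', or 'neutral' by
    counting matches against the tiny word lists above."""
    words = {w.strip(".,!?#@\"'").lower() for w in tweet.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(lexicon_sentiment("We should trust and support the scientific method"))
print(lexicon_sentiment("Politicians keep interfering with science"))
```

Note that a tweet such as "Scientists ignored their own method" and one praising that same method would receive opposite labels even when both endorse the scientific method itself, which mirrors the high-inference problem the researchers encountered.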


The findings of this study are confined to English-language tweets and to the time period during which the dataset was collected and queried, and cannot be generalized beyond that time frame. Another limitation pertains to the brevity of Twitter posts for content analysis: inferences about their content are contextual (depending on what they quote), and tweets lack the elaborate contextual information usually found in traditional data sources such as textbook passages and participant interviews. Also, some tweets may have referred to the scientific method without using these exact terms, or without using them contiguously, which excluded them from the examined sample. Finally, access to tweets is rather volatile. The search tools used to crawl the internet collect live links to these tweets and do not store the actual texts. This means that if users' accounts are closed or suspended, or if users delete their tweets, this information is lost and cannot be revisited.



This study draws its significance from being the first to pave the way for examining public discourse on social media from a NOS perspective. We accomplished this by using methods for mining big data together with conceptual tools from the NOS and science education literature to determine optimal key terms for querying the data and generating a data set for detailed analysis. The study's application of the FRA framework outside the realm of formal education (K-16), to the rather chaotic exchanges encountered on social media, proved fruitful for capturing NOS trends in tweets related to COVID-19. Assigning FRA categories to tweets indicated that NOS content pertaining to these categories is present, but the quality of this content was not investigated, not because of a limitation of the framework, but because the medium is too brief to allow deep qualitative analysis. In navigating territory that is uncharted both methodologically and conceptually, we were motivated by the conviction that the proliferation of 21st-century communication tools demands expanding how we conceptualize and study NOS in social media contexts using rich frameworks from science education. Future studies on NOS in the public sphere can build on this study's findings and explore new questions as the tools for mining and analyzing social media data continue to evolve. For example, such studies can examine the relative effectiveness of different keywords for identifying NOS content, identify NOS content in relation to specific socioscientific issues such as climate change and genetically modified organisms (GMOs), or analyze how NOS knowledge evolves within established learning communities (student or professional). Large-scale studies, like the present one, have the advantage of exploring general trends but may not allow closer scrutiny. Studies of dedicated chat sites permit mapping knowledge growth within prescribed social networks (as in Rosenberg et al., 2020) and allow for planned interventions, in-depth analysis, and follow-up.
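The keyword-querying step described above can be sketched in a few lines. This is an illustrative sketch only, assuming tweets are available as plain strings; the study's actual pipeline (mining archived COVID-19 tweet datasets and querying them) involved tooling, authentication, and metadata not shown here. The regular expression mirrors the study's query by requiring "scientific method" or "scientific methods" to appear with the two words contiguous.

```python
import re

# Match "scientific method(s)", case-insensitively, with the two
# words contiguous, mirroring the study's keyword query.
QUERY = re.compile(r"\bscientific\s+methods?\b", re.IGNORECASE)

def matches_query(tweet: str) -> bool:
    return bool(QUERY.search(tweet))

# Hypothetical example tweets, not drawn from the study's dataset.
tweets = [
    "The Scientific Method requires doubt and revision.",
    "Scientists use many methods, not one scientific recipe.",
    "Follow scientific methods, not political pressure!",
]
subset = [t for t in tweets if matches_query(t)]
print(len(subset))  # two of the three example tweets match
```

As the limitations section notes, a contiguity requirement of this kind excludes tweets that discuss scientific methodology without using the exact phrase, which is one reason keyword choice merits systematic study.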


The fact that many individual tweets referenced multiple FRA categories reflects the interrelationship among these NOS categories from an applied standpoint. When reviewing the Twitter dataset, it was evident that many references to NOS were closely associated with the social-institutional structures that guide and shape the production of scientific knowledge. The category of Political power structures, for example, was often noted in combination with other components of the framework (Knowledge; Methods and methodological rules; Knowledge together with Methods and methodological rules; Aims and values). The researchers regard these references to social-institutional structures as significant, since they convey some level of public engagement with NOS.

Public Understanding of Science

The scientific and medical urgency imposed by COVID-19 has prompted a stronger need to understand both the disease and NOS; it has also exposed the need for a multi-pronged, multi-disciplinary approach to improving functional scientific literacy in general and in the context of pandemics in particular. At some level, the impact of public health decisions on personal freedoms and economic activity has led individuals, communities, and politicians to question science and its methods. Thoughtfully crafted articles for the general public by scientists such as Al-Khalili (2020) and historians such as Cowles (2020), originally published in traditional media outlets and shared on social media, can raise awareness of how science works and make timely contributions to public understanding of NOS. This is because understanding the complexity of science and scientific activity in the context of COVID-19 is not straightforward; it requires placing science in its broader cognitive, epistemic, social, institutional, and political context, both in and outside the classroom. By joining forces with colleagues in other disciplines and engaging strategically with mainstream and social media platforms, science educators can play a vital role in promoting NOS awareness beyond the walls of the classroom while using the public pulse to inform school-based NOS goals and priorities.


  1. Abd-El-Khalick, F. (2014). The evolving landscape related to assessment of NOS. In N. G. Lederman, & S. K. Abell (Eds.), Handbook of research on science education (vol. II, pp. 621–650). Routledge.

  2. Ahmed, W. (2019). Using Twitter as a data source: An overview of social media research tools. Retrieved from

  3. Ahmed, W., Bath, P., & Demartini, G. (2017). Using Twitter as a data source: An overview of ethical, legal, and methodological challenges. In K. Woodfield (Ed.), The ethics of online research. (pp. 79–107). Emerald.

  4. Al-Khalili, J. (2020). Doubt is essential for science – but for politicians, it’s a sign of weakness. The Guardian. Retrieved from

  5. Allchin, D. (2004). Pseudohistory and pseudoscience. Science & Education, 13, 179–195.


  6. Allchin, D. (2020). The COVID-19 conundrum. The American Biology Teacher, 82(6), 1–5.


  7. American Association for the Advancement of Science. (1989). Science for all Americans. American Association for the Advancement of Science.

  8. Beninger, K. (2016). Social media users’ views on the ethics of social media research. In L. Sloan & A. Quan-Haase (Eds.), The SAGE handbook of social media research methods. (pp. 57–73). Sage.


  9. Berson, I. R., & Berson, M.J. (2003). Digital literacy for effective citizenship. (Advancing Technology). Social Education, 67(3), p. 164+. Gale Academic OneFile, Accessed October 20, 2020 from,

  10. Blachowicz, J. (2009). How science textbooks treat scientific method: A philosopher’s perspective. The British Journal for the Philosophy of Science, 60(2), 303–344.


  11. BouJaoude, S., Dagher, Z., & Refai, S. (2017). The portrayal of nature of science in Lebanese 9th grade science textbooks. In C. McDonald & F. Abd-El-Khalick (Eds.), Representations of nature of science in school science textbooks – A global perspective. (pp. 79–97). Routledge.


  12. Boursier, V., Gioia, F., Musetti, A., & Schimmenti, A. (2020). Facing loneliness and anxiety during the COVID-19 isolation: The role of excessive social media use in a sample of Italian adults. Frontiers in Psychology.

  13. Brothers, W. (2020). A timeline of COVID vaccine development. Biospace.

  14. Chamary, J.V. (2021). The strange story of Remdesivir, A Covid drug that doesn’t work. Forbes.

  15. Chaparian, S. (2020). Changes in grade 7 learners’ NOS understandings and argumentation skills after engaging in reflective discussions following alternative information evaluation in the context of socio-scientific controversial issues. Unpublished master’s thesis. American University of Beirut, Beirut, Lebanon.

  16. Cofré, H., Núñez, P., Santibáñez, D., Pavez, J., Valencia, M., & Vergara, C. (2019). A critical review of students’ and teachers’ understanding of nature of science. Science & Education, 28, 205–248.


  17. Collins, F. (2020). Wash your hands, people! Video tweet retrieved from

  18. Cowles, H. M. (2020). The scientific method can't save us from the coronavirus: What we need is problem-solving—creativity, flexibility and teamwork. The Washington Post. Retrieved from

  19. Dagher, Z., & BouJaoude, S. (2005). Students’ perceptions of the nature of evolutionary theory. Science Education, 89, 378–391.


  20. Deng, F., Chen, D., Tsai, C., & Chai, C. (2011). Students’ views of the nature of science: A critical review of research. Science Education, 95, 961–999.


  21. Dewey, J. (1910). How we think. D.C. Heath.


  22. Dos Santos, W. G. (2020). Natural history of COVID-19 and current knowledge on treatment therapeutic options. Biomedicine & Pharmacotherapy, 129, 110493.


  23. Erduran, S., & Dagher, Z. (2014). Reconceptualizing the nature of science for science education: Scientific knowledge, practices and other family categories. Springer.


  24. Feldman, R. (2013) Techniques and applications for sentiment analysis. Communications of the ACM, 56(4), 82–89.

  25. First Versions. (n.d.). Twitter. Retrieved October 31, 2020, from

  26. Forsey, C. (2020). Twitter, Facebook, or Instagram? Which platform(s) you should be on. Retrieved from

  27. Gruzd, A., & Mai, P. (2020). Going viral: How a single tweet spawned a COVID-19 conspiracy theory on Twitter. Big Data & Society, 7(2), 205395172093840.


  28. Gupta, S. (2020). Video tweet. Retrieved from

  29. Halpin, P. A. (2016). Research and teaching: Using Twitter in a non-science major science class increases students’ use of reputable science sources in class discussions. Journal of College Science Teaching, 45(6), 71–77.


  30. Hennessy, C. M., Kirkpatrick, E., Smith, C. F., & Border, S. (2016). Social media and anatomy education: Using Twitter to enhance the student learning experience in anatomy. Anatomical Sciences Education, 9(6), 505–515.


  31. Huang, X., Jamison, A., Broniatowski, D., Quinn, S., & Dredze, M. (2020). Coronavirus Twitter Data: A collection of COVID-19 tweets with automated annotations. Retrieved from

  32. Irzik, G., & Nola, R. (2014). New directions for nature of science research. In M. Matthews (Ed.), International handbook of research in history, philosophy and science teaching. (pp. 999–1021). Springer.


  33. Jingnan, H. (2020). The Coronavirus crisis: Why there are so many different guidelines for face masks for the public. NPR.

  34. Lamsal, R. (2020). Coronavirus (COVID-19) tweets dataset. IEEE DataPort.

  35. Lederman, N. G., Abd-El-Khalick, F., Bell, R. L., & Schwartz, R. S. (2002). Views of nature of science questionnaire: Towards valid and meaningful assessment of learners’ conceptions of the nature of science. Journal of Research in Science Teaching, 39(6), 497–521.


  36. Lederman, N. G., & Lederman, J. S. (2014). Research on teaching and learning of nature of science. In N. G. Lederman, & S. K. Abell (Eds.), Handbook of research on science education (vol. II, pp. 600–620). Routledge.

  37. Lopez, C. E., Vasu, M., & Gallemore, C. (2020). Understanding the perception of COVID-19 policies by mining a multilanguage Twitter dataset. Cornell University. Retrieved from

  38. Malik, A., Heyman-Schrum, C., & Johri, A. (2019). Use of Twitter across education settings: A review of the literature. International Journal of Educational Technology in Higher Education, 16, 36.


  39. McComas, W. F. (1996). Ten myths of science: Reexamining what we think we know about the NOS. School Science & Mathematics, 96(1), 10–16.


  40. McDonald, C. (2017). Exploring representations of nature of science in Australian junior secondary school science textbooks: A case study of genetics. In C. McDonald & F. Abd-El-Khalick (Eds.), Representations of nature of science in school science textbooks: A global perspective. (pp. 98–117). Routledge.


  41. McDonald, C., & Abd-El-Khalick, F. (Eds.). (2017). Representations of NOS in school science textbooks: A global perspective. Routledge.


  42. National Research Council. (1996). National science education standards. National Academies Press.


  43. NGSS Lead States. (2013). Next generation science standards: For states, by states. The National Academies Press.


  44. Pacheco, E. (2020). COVID-19's impact on social media usage. The Brandon Agency. Retrieved from

  45. Park, H. W., Park, S., & Chong, M. (2020). Conversations and medical news frames on Twitter: Infodemiological study on COVID-19 in South Korea. Journal of Medical Internet Research, 22(5).

  46. Robertson, S. E., Walker, S., Jones, S., Hancock-Beaulieu, M., & Gatford, M. (1994, November). Okapi at TREC-3. Proceedings of the Third Text REtrieval Conference (TREC-3). Gaithersburg, MD, USA.

  47. Robertson S. E., & Zaragoza, H. (2009). The probabilistic relevance framework: BM25 and beyond. Foundations and Trends in Information Retrieval, 3(4), 333–389.

  48. Robinson, R. (2020). The 7 top social media sites you need to care about in 2020. Retrieved from

  49. Rosenberg, J. M., Reid, J. W., Dyer, E. B., Koehler, M. J., Fischer, C., & McKenna, T. J. (2020). Idle chatter or compelling conversation? The potential of the social media-based #NGSSchat network for supporting science education reform efforts. Journal of Research in Science Teaching, 57(9), 1322–1355.


  50. Rudolph, J. (2019). How we teach science: What’s changed and why it matters? Harvard University Press.


  51. Sani S., Wiratunga N., Massie S., & Lothian R. (2013). Sentiment classification using supervised sub-spacing. In, M. Bramer & M. Petridis (eds), Research and development in intelligent systems XXX (pp. 109–122). SGAI 2013, Springer.

  52. Saunders, B., Sim, J., Kingstone, T., Baker, S., Waterfield, J., Bartlam, B., Burroughs, H., & Jinks, C. (2018). Saturation in qualitative research: Exploring its conceptualization and operationalization. Quality & Quantity, 52(4), 1893–1907.


  53. Sayare, S. (2020). He was a science star. Then he promoted a questionable cure for Covid-19. The New York Times Magazine. Retrieved from

  54. Sayce, D. (2020). The number of tweets per day in 2020. Retrieved on October 29 from

  55. Sharma, K., Seo, S., Meng, C., Rambhatla, S., & Liu, Y. (2020). COVID-19 on social media: Analyzing misinformation in Twitter conversations. [Preprint]. Retrieved from arXiv:2003.12309v4[cs.SI].

  56. Sinatra, G. M., & Hofer, B. K. (2016). Public understanding of science: Policy and educational implications. Policy Insights from the Behavioral and Brain Sciences, 3(2), 245–253.


  57. Sloan, L., & Quan-Haase, A. (2016). The SAGE handbook of social media research methods. Sage.

  58. Thurs, D. P. (2015). Myth 26: That the scientific method accurately reflects what scientists do. In R. L. Numbers & K. Kampourakis (Eds.), Newton’s apple and other myths about science. (pp. 210–218). Harvard University Press.


  59. Weisberg, D. S., Landrum, A. R., Hamilton, J., & Weisberg, M. (2020). Knowledge about the nature of science increases public acceptance of science regardless of identity factors. Public Understanding of Science. Published online (December).

  60. Williams, M. L., Burnap, P., & Sloan, L. (2017). Towards an ethical framework for publishing Twitter data in social research: Taking into account users’ views, online context and algorithmic estimation. Sociology, 51(6), 1149–1168.


  61. Woodcock, B. A. (2014). The “scientific method” as myth and ideal. Science & Education, 23(10), 2069–2093.

  62. World Health Organization. (2020). Rolling updates on coronavirus disease (COVID-19): Updated July 21, 2020. Retrieved from

  63. Zimbra, D., Abbassi, A., Zeng, D., & Chen, H. (2018). The state-of-the-art in Twitter sentiment analysis: A review and benchmark evaluation. ACM Transactions on Management Information Systems. Article #5 retrieved from

  64. Zimmer, M. (2010). “But the data is already public”: On the ethics of research in Facebook. Ethics Information Technology, 12, 313–325.



Author information



Corresponding author

Correspondence to Zoubeida R. Dagher.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.



Description of each FRA category, guiding questions for analyzing tweets, example tweets, and justification. For tweets involving a response to another user, the analysis focuses on the content tweeted by User B (queried tweet), in response to User A (quoted tweet)



Cite this article

Bichara, D. B., Dagher, Z. R., & Fang, H. What do COVID-19 Tweets Reveal about Public Engagement with Nature of Science? Science & Education (2021).
