1 Introduction

1.1 Information disorder in social media

Rumors, misinformation, disinformation, and mal-information are common challenges confronting media of all types. The problem is, however, worse in the case of digital media, especially on social media platforms. Ease of access and use, the speed of information diffusion, and the difficulty of correcting false information make controlling undesirable information a formidable task [1]. Alongside these challenges, social media has also been highly influential in spreading timely and useful information. For example, social media enabled the recent #BlackLivesMatter movement, uniting like-minded people across the world in solidarity after George Floyd was killed by police brutality, as it did the 2011 Arab Spring in the Middle East and the 2017 #MeToo movement against sexual harassment and abuse [2, 3]. Although scholars have addressed information disorder in social media, syntheses of the insights from these studies are rare.

Information that is false or misleading and spreads unintentionally is known as misinformation [4]. Prior research on misinformation in social media has highlighted various characteristics of misinformation, and interventions thereof, in different contexts. The issue of misinformation has become dominant with the rise of social media, attracting scholarly attention particularly after the 2016 USA Presidential election, when misinformation apparently influenced the election results [5]. The word 'misinformation' was listed as one of the global risks by the World Economic Forum [6]. A related term that is often confused with misinformation is 'disinformation': information that is false or misleading and, unlike misinformation, spread intentionally. Disinformation campaigns are often seen in political contexts, where state actors create them for political gain. In India, during the initial stage of COVID-19, there was reportedly a surge in fake news linking the virus outbreak to a particular religious group. This disinformation gained media attention as it was widely shared on social media platforms. As a result of the targeting, it eventually translated into physical violence and discriminatory treatment against members of the community in some Indian states [7]. 'Rumors' and 'fake news' are related terms: rumors are unverified information or statements circulated with uncertainty, and fake news is misinformation distributed in an official news format. Source ambiguity, personal involvement, confirmation bias, and social ties are some of the rumor-causing factors. Yet another related term, mal-information, is accurate information that is used out of context to spread hatred or abuse against a person or a particular group. Our review focuses on misinformation that is spread through social media platforms. The words 'rumor' and 'misinformation' are used interchangeably in this paper. Further, we identify factors that cause misinformation based on a systematic review of prior studies.

Ours is one of the early attempts to review social media research on misinformation. This review focuses on three sensitive domains of disaster, health, and politics, with three objectives: (a) to analyze previous studies to understand the impact of misinformation on these three domains, (b) to identify theoretical perspectives used to examine the spread of misinformation on social media, and (c) to develop a framework to study key concepts and their inter-relationships emerging from prior studies. We selected these specific domains because the impact of misinformation in them, in terms of both speed of spread and scale of influence, is high and detrimental to the public and to governments. To the best of our knowledge, reviews of the literature on social media misinformation themes are relatively scanty. This review contributes to an emerging body of knowledge in Data Science and informs efforts to combat social media misinformation. Data Science is an interdisciplinary field that incorporates areas such as statistics, management, and sociology to study data and create knowledge from it [8]. This review will also inform future studies that aim to evaluate and compare patterns of misinformation on sensitive themes of social relevance, such as disaster, health, and politics.

The paper is structured as follows. The first section introduces misinformation in the social media context. In Sect. 2, we provide a brief overview of prior research on misinformation and social media. Section 3 describes the research methodology, including details of the literature search and selection process. Section 4 presents the analysis of the spread of misinformation on social media based on three themes (disaster, health, and politics) and the review findings, including the current state of research. Section 5 discusses theoretical foundations, determinants of misinformation on social media platforms, and strategies to control its spread. Section 6 identifies key research gaps and future directions, and Sect. 7 concludes with the implications and limitations of the paper.

2 Social media and spread of misinformation

Misinformation arises in uncertain contexts when people are confronted with a scarcity of information they need. During unforeseen circumstances, the affected individual or community experiences nervousness or anxiety. Anxiety is one of the primary reasons behind the spread of misinformation. To overcome this tension, people tend to gather information from sources such as mainstream media and official government social media handles to verify the information they have received. When they fail to receive information from official sources, they collect related information from their peer circles or other informal sources, which would help them to control social tension [9]. Furthermore, in an emergency context, misinformation helps community members to reach a common understanding of the uncertain situation.

2.1 The echo chamber of social media

Social media has increasingly grown in power and influence and has acted as a medium to accelerate sociopolitical movements. Network effects enhance participation on social media platforms, which in turn spreads information (good or bad) at a faster pace than traditional media. Furthermore, owing to a massive surge in online content consumption, primarily through social media, both business organizations and political parties have begun to share ambiguous or fake content to influence online users and their decisions for financial and political gain [9, 10]. On the other hand, people often approach social media with a hedonic mindset, which reduces their tendency to verify the information they receive [9]. Repeated exposure to content that coincides with pre-existing beliefs increases its believability and shareability. This process, known as the echo-chamber effect [11], is fueled by confirmation bias: the tendency of a person to favor information that reinforces pre-existing beliefs and to neglect opposing perspectives and viewpoints.

Platforms’ structure and algorithms also play an essential role in spreading misinformation. Tiwana et al. [12] have defined platform architecture as ‘a conceptual blueprint that describes how the ecosystem is partitioned into a relatively stable platform and a complementary set of modules that are encouraged to vary, and the design rules binding on both’. The business models of these platforms are based on maximizing user engagement. In the case of Facebook or Twitter, for example, a user's feed is curated around existing beliefs and preferences, serving similar content and thus contributing to the echo-chamber effect.

Platform architecture makes the transmission and retransmission of misinformation easier [12, 13]. For instance, WhatsApp has a one-touch forward option that enables users to forward messages to multiple users simultaneously. Earlier, a WhatsApp user could forward a message to 250 groups or users at a time; as a measure for controlling the spread of misinformation, this was limited to five in 2019. WhatsApp claimed that this restriction reduced message forwarding globally by 25% [14]. Apart from platform politics, users also play an essential role in creating and distributing misinformation. In a disaster context, people tend to share misinformation based on their subjective feelings [15].

Misinformation has the power to influence the decisions of its audience. It can change a citizen's approach toward a topic or a subject. The anti-vaccine movement on Twitter during the 2015 measles outbreak in Disneyland, California, serves as a good example: the movement created conspiracy theories and mistrust of the state, which increased the vaccine refusal rate [16]. Misinformation can even influence the election of governments by manipulating citizens’ political attitudes, as seen in the 2016 USA and 2017 French elections [17]. Of late, people rely heavily on Twitter and Facebook to collect the latest happenings from mainstream media [18].

Combating misinformation on social media has been a challenging task for governments in several countries. When social media influences elections [17] and health campaigns (like vaccination), governments and international agencies demand that social media owners take necessary action to combat misinformation [13, 15]. Platforms began to regulate bots that were used to spread misinformation. Facebook announced changes to its algorithms to combat misinformation, down-ranking posts flagged by its fact-checkers, which reduces the popularity of the post or page [17]. However, misinformation has become a complicated issue due to the growth of new users and the emergence of new social media platforms. Jang et al. [19] have suggested two approaches other than governmental regulation to control misinformation: literacy and corrective. The literacy approach proposes educating users to increase their cognitive ability to differentiate misinformation from genuine information. The corrective approach provides more fact-checking facilities for users, with warnings issued against potentially fabricated content based on crowdsourcing. Both approaches have limitations: the literacy approach has attracted criticism as it transfers responsibility for the spread of misinformation to citizens, and the corrective approach will have only a limited impact as the volume of fabricated content escalates [19,20,21].

An overview of the literature on misinformation reveals that most investigations focus on examining methods to combat misinformation. Social media platforms are still discovering new tools and techniques to mitigate misinformation on their platforms; this calls for research to understand their strategies.

3 Review method

This research followed a systematic literature review process. The study employed a structured approach based on Webster’s guidelines [22] to identify relevant literature on the spread of misinformation. These guidelines helped in maintaining a quality standard while selecting the literature for review. The initial stage of the study involved exploring research papers from relevant databases to understand the volume and availability of research articles. We extended the literature search to interdisciplinary databases as well, gathering articles from Web of Science, ACM digital library, AIS electronic library, EBSCO host business source premier, ScienceDirect, Scopus, and Springer link. Apart from this, a manual search was performed in the Information Systems (IS) scholars' basket of journals [23] to ensure we did not miss any articles from these journals. We also gave preference to articles with a Data Science or Information Systems background. The systematic review process began with a keyword search using predefined keywords (Fig. 2). We identified related terms such as 'misinformation', 'rumors', 'spread', and 'social media', along with their combinations, for the search process. The keyword search covered the title, abstract, and keyword list. The literature search was conducted in April 2020; we revisited the literature in December 2021 to include the latest publications from 2020 to 2021.
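
To illustrate how such keyword combinations can be enumerated, the following minimal Python sketch builds boolean query strings from the predefined terms; the grouping and the extra synonym 'diffusion' are our illustrative assumptions, not the exact lists used in the search.

```python
from itertools import product

# Keyword groups built from the predefined terms named above;
# "diffusion" is an assumed extra synonym for illustration only.
topic_terms = ["misinformation", "rumors"]
process_terms = ["spread", "diffusion"]
platform_terms = ["social media"]

# Combine the groups into boolean queries for title/abstract/keyword search.
queries = [
    f'"{t}" AND "{p}" AND "{m}"'
    for t, p, m in product(topic_terms, process_terms, platform_terms)
]

for q in queries:
    print(q)  # e.g. "misinformation" AND "spread" AND "social media"
```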

It was observed that scholarly discussion of ‘misinformation and social media’ began to appear in research after 2008. Later, in 2010, the topic gained more attention when Twitter bots were used for spreading fake news about the replacement of a USA Senator [24]. Hate campaigns and fake-follower activities were growing during the same period. As evident from Fig. 1, which shows the number of articles on misinformation published between 2005 and 2021 in three databases (Scopus, Springer, and EBSCO), academic engagement with misinformation gained further impetus after the 2016 US Presidential election, when social media platforms had apparently influenced the election [20].

Fig. 1

Articles published on misinformation during 2005–2021 (databases: Scopus, Springer, and EBSCO)

As Data Science is an interdisciplinary field, the focus of our literature review goes beyond disciplinary boundaries. In particular, we focused on the three domains of disaster, health, and politics. This thematic focus has two underlying reasons: (a) the impact of misinformation spread through social media is sporadic and has the most damaging effects in these three domains, and (b) our selection criteria in the systematic review ultimately yielded research papers related to these three domains. This review excluded platforms designed for professional and business users, such as LinkedIn and Behance. A rationale for the choice of these themes is discussed in the next section.

3.1 Inclusion–exclusion criteria

Figure 2 depicts the systematic review process followed in this study. In our preliminary search, 2148 records were retrieved from the databases; all of these articles were gathered onto a spreadsheet, which was manually cross-checked against the journals linked to the articles. The inclusion criteria were: studies published during 2005–2021, studies published in English, articles from peer-reviewed journals, journal rating, and relevance to misinformation. We excluded reviews, theses, dissertations, and editorials, as well as articles on misinformation not related to social media. To fetch the best from these articles, we selected articles from top journals, rated above three on the ABS rating and A*, A, or B on the ABDC rating. This process, while ensuring the quality of papers, also narrowed the scope of the study to 643 articles of acceptable quality. We did not perform backward or forward tracking of references. During this process, duplicate records were also identified and removed. Further screening of articles based on the title, abstract, and full text (wherever necessary) brought the number down to 207 articles.
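
Conceptually, this screening amounts to a sequence of filters over the spreadsheet of retrieved records. The Python sketch below illustrates that logic under our own assumptions; the column names, sample rows, and exact combination of rating criteria are hypothetical, not our actual tooling.

```python
import pandas as pd

# Two hypothetical records standing in for the 2148 retrieved articles.
records = pd.DataFrame([
    {"title": "Rumor spread on Twitter during floods", "year": 2018,
     "language": "English", "peer_reviewed": True,
     "abs_rating": 4, "abdc_rating": "A"},
    {"title": "Editorial on fake news", "year": 2019,
     "language": "English", "peer_reviewed": False,
     "abs_rating": 2, "abdc_rating": "C"},
])

# Apply the inclusion criteria and drop duplicates, mirroring the
# manual screening steps described above.
included = records[
    records["year"].between(2005, 2021)
    & (records["language"] == "English")
    & records["peer_reviewed"]
    & ((records["abs_rating"] > 3) | records["abdc_rating"].isin(["A*", "A", "B"]))
].drop_duplicates(subset="title")

print(len(included), "record(s) retained for title/abstract screening")
```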

Fig. 2

Systematic literature review process

Further screening based on the three themes reduced the focus to 89 articles. We conducted a full-text analysis of these 89 articles. We further excluded articles that had not considered misinformation as a central theme and finally arrived at 28 articles for detailed review (Table 1).

Table 1 Reviewed articles

The selected studies used a variety of research methods to examine misinformation on social media. Experimentation and text mining of tweets emerged as the most frequent research methods: 11 studies used experimental methods and eight used Twitter data analyses. Apart from these, there were three surveys, two mixed-methods studies, two case studies, one study using opportunistic sampling, and one exploratory study. The literature selected for review includes nine articles on disaster, eight on healthcare, and eleven on politics. We preferred papers based on three major social media platforms: Twitter, Facebook, and WhatsApp. These platforms have the highest transmission rates and the most active users [25], making them the most likely venues for misinformation propagation.

3.2 Coding procedure

Initially, both authors manually coded the articles individually by reading the full text of each article, and identified the three themes: disaster, health, and politics. We used an inductive coding approach to derive codes from the data. The intercoder reliability rate between the authors was 82.1%. Disagreements about which theme a few papers fell under were discussed and resolved. Later, we used NVIVO, a qualitative data analysis software, to encode and categorize the themes from the articles. The codes that emerged from the articles were categorized into sub-themes and later attached to the main themes: disaster, health, and politics. NVIVO produced a ranked list of codes based on frequency of occurrence (“Appendix”). An intercoder reliability check was then completed by an external research scholar with a different area of expertise to ensure reliability. The coder agreed on 26 of the 28 articles (92.8%), which indicated a high level of intercoder reliability [49]. The independent researcher’s disagreement with the coding of the two remaining articles was discussed among the authors and the research scholar, and a consensus was reached.
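
The agreement figures above are simple percent agreement. A minimal sketch of that arithmetic is shown below; the theme assignments and disagreement positions are made up, and only the 26-of-28 ratio mirrors the text.

```python
# Percent agreement between two coders over the 28 reviewed articles.
# Theme labels and disagreement positions are invented; only the
# 26/28 agreement ratio mirrors the figure reported above.
coder_a = ["disaster"] * 9 + ["health"] * 8 + ["politics"] * 11
coder_b = list(coder_a)
coder_b[3] = "health"      # hypothetical disagreement
coder_b[20] = "disaster"   # hypothetical disagreement

agreements = sum(a == b for a, b in zip(coder_a, coder_b))
percent = agreements / len(coder_a) * 100
print(f"{agreements}/{len(coder_a)} articles agreed ({percent:.1f}%)")
# prints 26/28 articles agreed (92.9%); the text truncates this to 92.8%
```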

4 Results

We initially reviewed the articles separately under the categories of disaster, health, and politics. We first present emergent issues that cut across these themes.

4.1 Social media misinformation research

Disaster, health, and politics emerged as the three domains (“Appendix”) where misinformation can cause severe harm, often leading to casualties or even irreversible effects. Mitigating these effects can also demand a substantial financial or human-resource burden, considering the scale of the effect and the risk of negative information spreading to the public at large. All these areas are sensitive in nature. Further, disaster, health, and politics have gained the attention of researchers and governments because the challenges of misinformation confronting these domains are rampant. Besides their sensitivity, misinformation in these areas has a higher potential to exacerbate existing crises in society. During the 2020 Munich security conference, WHO’s Director-General noted: “We are not just fighting an epidemic; we are fighting an infodemic”, referring to COVID-19 misinformation spreading faster than the virus [50].

More than 6000 people were hospitalized due to COVID-19-related misinformation in the first three months of 2020 [51]. As COVID-19 vaccination began, one popular myth claimed that Bill Gates wanted to use vaccines to embed microchips in people to track them, which created vaccine hesitancy among citizens [52]. These reports show the severity of the spread of misinformation and how it can aggravate a public health crisis.

4.2 Misinformation during disaster

In the context of emergency situations (unforeseen circumstances), the credibility of social media information has often been questioned [11]. When a crisis occurs, affected communities often experience a lack of the localized information they need to make emergency decisions. This accelerates the spread of misinformation, as people tend to fill the information gap with misinformation or 'improvised news' [9, 24, 25]. The broadcasting power of social media and the re-sharing of misinformation can weaken and slow down rescue operations [24, 25]. As local people have more access to the disaster area, they become the immediate reporters of a crisis through social media; mainstream media comes into the picture only later. However, recent incidents reveal that voluntary reporting of this kind has begun to affect rescue operations negatively, as it often acts as a collective rumor mill [9] that propagates misinformation. During the 2018 floods in the South-Indian state of Kerala, a fake video about a leak in the Mullaperiyar Dam created unnecessary panic among citizens, negatively impacting the rescue operations [53]. Information from mainstream media is relatively more reliable, as mainstream media have traditional gatekeepers, such as peer reviewers and editors, who cross-check the information source before publication. Chua et al. [28] found that a major share of corrective tweets were retweeted from mainstream news media; mainstream media is thus considered a preferred rumor-correction channel, attempting to correct misinformation with the right information.

4.2.1 Characterizing disaster misinformation

Oh et al. [9] studied citizen-driven information processing based on three social crises using rumor theory. The main characteristic of a crisis is the complexity of information processing and sharing [9, 24]. A task is considered complex when characterized by an increase in information load, information diversity, or the rate of information change [54]. Information overload and information dearth are the two grave concerns that interrupt communication between the affected community and a rescue team. Information overload, where too many enquiries and too much fake news distract the response team, slows its recognition of valid information [9, 27]. According to Balan and Mathew [55], information overload occurs when the volume of information, with features such as complex wording and multiple languages, exceeds what a human being can process. Information dearth, in our context, is the lack of the localized information that the affected community needs to make emergency decisions. When official government communication channels or mainstream media cannot fulfill citizens' needs, they resort to information from their social media peers [9, 27, 29].

In a social crisis context, Tamotsu Shibutani [56] defines rumoring as the collective sharing and exchange of information, which helps community members reach a common understanding of the crisis situation [30]. This mechanism also operates on social media, where both information dearth and information overload arise. Anxiety, information ambiguity (source ambiguity and content ambiguity), personal involvement, and social ties are the rumor-causing variables in a crisis context [9, 27]. In general, anxiety is a negative feeling caused by distress or a stressful situation, which produces adverse outcomes [57]. In a crisis or emergency, a community may experience anxiety in the absence of reliable information or, in other cases, when confronted with an overload of information, making it difficult to take appropriate decisions. Under such circumstances, people may rely on rumors as a primary source of information. The influence of anxiety is higher during a community crisis than during a business crisis [9]. However, anxiety, as an attribute, varies with the nature of the platform; for example, Oh et al. [9] found that the Twitter community does not succumb to social pressure the way the WhatsApp community does [30]. Simon et al. [30] developed a model of rumor retransmission on social media and identified information ambiguity, anxiety, and personal involvement as motives for rumormongering. Attractiveness is another rumor-causing variable: it operates when aesthetically appealing visual aids or designs capture a receiver’s attention. Here, believability matters more than the content’s reliability or the truth of the information received.

The second stage of the spread of misinformation is misinformation retransmission. Apart from the rumor-causing variables reported in Oh et al. [9], Liu et al. [13] found the sender's credibility and attractiveness to be significant variables in misinformation retransmission. Personal involvement and content ambiguity can also affect misinformation transmission [13]. Abdullah et al. [25] explored retweeters' motives for spreading disaster information on the Twitter platform. Content relevance, early information [27, 31], trustworthiness of the content, emotional influence [30], retweet count, pro-social behavior (altruistic behavior among citizens during the crisis), and the need to inform their circle are the factors that drive users’ retweets [25]. Lee et al. [26] examined the impact of Twitter features on message diffusion based on the 2013 Boston marathon tragedy. The study reported that during crisis events (especially disasters), a tweet with a shorter reaction time (the time between the crisis and the initial tweet) had a higher impact than other tweets. This shows that, to an extent, misinformation can be controlled if officials communicate in the early stage of a crisis [27]. Liu et al. [13] showed that tweets with hashtags influence the spread of misinformation. In contrast, Lee et al. [26] found that tweets with no hashtags had more influence, a difference attributable to context: hashtags used for marketing or advertising have a positive impact, while in disaster or emergency situations (as on Twitter) they have a negative impact, and messages without hashtags diffuse more widely than messages with them [26].

Oh et al. [15] explored the behavioral aspects of social media participants that lead to the retransmission and spread of misinformation. They found that when people believe a threatening piece of misinformation they receive, they are more likely to spread it and to take safety measures (sometimes even extreme actions). Repetition of the same misinformation from different sources also makes it more believable [28]. However, once people realized the received information was false, they were less likely to share it with others [13, 26]. The characteristics of the platform used to deliver the misinformation also matter; for instance, the number of likes and shares increases the believability of a social media post [47].

In summary, we found that platform architecture also plays an essential role in the spread and believability of misinformation. While conducting this systematic literature review, we observed that most studies on disaster and misinformation are based on the Twitter platform: six of the nine papers we reviewed in the disaster area were based on Twitter. When a message was delivered in video format, it had a higher impact than audio or text messages, and if the message had a religious or cultural narrative, it led to behavioral action (a danger-control response) [15]. Users were more likely to spread misinformation through WhatsApp than through Twitter, and it was difficult to find the source of information shared on WhatsApp [30].

4.3 Misinformation related to healthcare

From our review, we found two systematic literature reviews that discuss health-related misinformation on social media. Yang et al. [58] explore the characteristics, impact, and influences of health misinformation on social media, and Wang et al. [59] address health misinformation related to vaccines and infectious diseases. These reviews show that health-related misinformation, especially about the MMR vaccine and autism, spreads widely on social media, and governments have been unable to control it.

The spread of health misinformation is an emerging issue facing public health authorities. Health misinformation can delay proper treatment of patients, adding further casualties in the public health domain [28, 59, 60]. People often tend to believe health-related information shared by their peers, and some share their treatment experiences or traditional remedies online. Such information may be taken out of context and may not even be accurate [33, 34]. Compared to health-related websites, the language used in health information shared on social media is simple and may not include essential details [35, 37]. Some studies report that conspiracy theories and pseudoscience have escalated casualties [33]. Pseudoscience refers to false claims that pretend to be backed by scientific evidence; the anti-vaccination movement on Twitter is one example [61]. Here, users may share such information owing to a lack of scientific knowledge [35].

4.3.1 Characterizing healthcare misinformation

The attributes that characterize healthcare misinformation are distinctly different from those of other domains. Chua and Banerjee [37] identified the characteristics of health misinformation as dread and wish. Dread is a rumor that creates panic and unpleasant consequences. For example, in the wake of COVID-19, misinformation widely shared on social media claimed that children 'died on the spot' after the mass COVID-19 vaccination program in Senegal, West Africa [61]. This message created panic among citizens, as it was shared more than 7000 times on Facebook [61]. Wish is a type of rumor that gives hope to the receiver (e.g., a rumor about free medicine distribution) [62]. Dread rumors look more trustworthy and are more likely to go viral; a dread rumor was the cause of violence against a minority group in India during COVID-19 [7]. Chua and Banerjee [32] added pictorial and textual representations as characteristics of health misinformation: a rumor that contains only text is a textual rumor, whereas a pictorial rumor contains both text and images. However, Chua and Banerjee [32] found that users prefer textual rumors to pictorial ones. Unlike rumors circulated during a natural disaster, health misinformation is long-lasting and can spread across boundaries. Personal involvement (the importance of the information to both sender and receiver), rumor type, and the presence of counter-rumors are some of the variables that can escalate users’ trusting and sharing behavior related to rumors [37]. Madraki et al.'s [46] study of COVID-19 misinformation and disinformation reported that COVID-19 misinformation on social media differs significantly across languages, countries, and their cultures and beliefs. The acceptance of social media platforms, as well as governmental censorship, also plays an important role here.

Widespread misinformation can also change collective opinion [29]. Online users’ epistemic beliefs can control their sharing decisions. Chua and Banerjee [32] argued that epistemologically naïve users (users who think knowledge can be acquired easily) are the type of users who accelerate the spread of misinformation on platforms. Those who read or share misinformation are not necessarily likely to follow it [37]. Gu and Hong [34] examined health misinformation in the mobile social media context. Mobile internet users differ from large-screen users: mobile phone users may have a stronger emotional attachment to the device, which also motivates them to believe received misinformation. Corrective efforts focused on large-screen users may therefore not work for mobile or small-screen users. Chua and Banerjee [32] suggested that the simplified sharing options of platforms also motivate users to share received misinformation before validating it. Shahi et al. [47] found that misinformation is propagated or shared even by verified Twitter handles, which become part of misinformation transmission either by creating it or by endorsing it through likes or shares.

Existing studies draw heavily on data from social networking sites such as Facebook and Twitter, although other platforms also escalate the spread of misinformation. This was evident in the wake of COVID-19, when an intense wave of misinformation was reported on WhatsApp, TikTok, and Instagram.

4.4 Social media misinformation and politics

There have been several studies on the influence of misinformation on politics across the world [43, 44]. Political misinformation has predominantly been used to influence voters. The 2016 USA Presidential election, the 2017 French election, and the 2019 Indian elections have been reported as examples where misinformation influenced the election process [15, 17, 45]. During the 2016 USA election, the partisan effect was a key challenge, with false information presented as if it came from an authorized source [39]. Based on a user's prior behavior on the platform, algorithms can manipulate the user's feed [40]. In a political context, fake news can create great harm, as it can influence voters and the public. Although fake news has a short ‘life’, its consequences may not be short-lived: verification of fake news takes time, and by the time verification results are shared, the fake news may have achieved its goal [43, 48, 63].

4.4.1 Characterizing misinformation in politics

Confirmation bias plays a dominant role in social media misinformation related to politics. Readers are more likely to read and engage with information that confirms their preexisting beliefs and political affiliations and to reject information that challenges them [46, 48]. For example, in the 2016 USA election, pro-Trump fake news was accepted by Republicans [19]. Misinformation spreads quickly among people who share similar ideologies [19]. The nature of the interface can also escalate the spread of misinformation. Kim and Dennis [36] investigated the influence of platforms' information presentation formats and reported that social media platforms indirectly push users to accept certain information by presenting it in a way that gives little importance to its source. This presentation is manipulative, as people tend to believe information from a reputed source and are more likely to reject information from a lesser-known source [42].

Pennycook et al. [39] and Garrett and Poulsen [40] argued that warning tags (or flags) on headlines can reduce the spread of misinformation. However, it is not practical to assign warning tags to all misinformation, as it is generated faster than valid information, and the fact-checking process on social media takes time. Hence, people tend to believe that headlines without warning tags are true, which defeats the purpose of the tags [39]. Furthermore, tagging can increase readers' reliance on warning tags and lead to misperception [39]: readers tend to believe that all information has been verified and consider untagged false information more accurate. This phenomenon is known as the implied truth effect [39]. In this case, source reputation ratings influence the perceived credibility of the information; readers give less importance to sources with low ratings [17, 50].

5 Theoretical perspectives of social media misinformation

We identified six theories used in the reviewed articles in relation to social media misinformation. Rumor theory was used most frequently among the studies chosen for our review, serving as the theoretical foundation of four articles [9, 11, 13, 37, 43]. Oh et al. [9] studied citizen-driven information processing on Twitter using rumor theory in three social crises, identifying key variables (source ambiguity, personal involvement, and anxiety) that spread misinformation. The authors further examined the acceptance of hate rumors and the aftermath of a community crisis based on the Bangalore mass exodus of 2012. Liu et al. [13] examined the reasons behind the retransmission of messages in disasters using rumor theory. Hazel Kwon and Raghav Rao [43] investigated how internet surveillance by the government affects citizens’ involvement with cyber-rumors during a homeland security threat. Diffusion theory has also been used in IS research to discern the adoption of technological innovations; researchers have used it to study retweeting behavior among Twitter users (tweet diffusion) during extreme events [26], investigating information diffusion based on four major elements of diffusion: innovation, time, communication channels, and social systems. Kim et al. [36] examined the effect of rating news sources on users’ belief in social media articles based on three different rating mechanisms: expert rating, user article rating, and user source rating. Reputation theory was used to show how users discern cognitive biases in expert ratings.

Murungi et al. [38] used rhetorical theory to argue that fact-checkers are less effective against fake news that spreads on social media platforms. The study proposed a different approach, focusing on the underlying belief structures that accept misinformation; the theory was used to identify fake news and socially constructed beliefs in the context of Alabama’s senatorial election in 2017. Using the third-person effect as the theoretical ground, the characteristics of rumor corrections on the Twitter platform have also been examined in the context of the death hoax of Singapore’s first prime minister, Lee Kuan Yew [28]; this paper explored the motives behind collective rumor and identified the key characteristics of collective rumor correction. Using situational crisis communication theory (SCCT), Paek and Hove [44] examined how governments can effectively respond to risk-related rumors during national-level crises in the context of a food safety rumor. Refuting the rumor, denying it, and attacking its source are the three rumor response strategies suggested by the authors to counter rumor-mongering (Table 2).

Table 2 Theories used in social media misinformation research

5.1 Determinants of misinformation in social media platforms

Figure 3 depicts the concepts that emerged from our review, organized using an Antecedents-Misinformation-Outcomes (AMIO) framework, an approach we adapt from Smith et al. [66]. Originally developed to study information privacy, the Antecedents-Privacy Concerns-Outcomes (APCO) framework provided a nomological canvas for presenting determinants, mediators, and outcome variables pertaining to information privacy. Following this canvas, we discuss the antecedents of misinformation, mediators of misinformation, and misinformation outcomes, as they emerged from prior studies (Fig. 3).

Fig. 3

Determinants of misinformation

Anxiety, source ambiguity, trustworthiness, content ambiguity, personal involvement, social ties, confirmation bias, attractiveness, illiteracy, ease of sharing options, and device attachment emerged as the variables determining misinformation in social media.

Anxiety is the emotional feeling of the person who sends or receives the information; a person who is anxious about received information is more likely to share or spread misinformation [9]. Source ambiguity concerns the origin of the message: when a person is convinced of the source of information, its perceived trustworthiness increases and the person shares it. Content ambiguity addresses the clarity of the information's content [9, 13]. Personal involvement denotes how important the information is to both sender and receiver [9]. Social ties matter because information shared by a family member or social peers influences a person to share it onward [9, 13]. From prior literature, it is understood that confirmation bias is one of the root causes of political misinformation. Research on device attachment reveals that users tend to believe and share information received on their personal devices [34]. After receiving misinformation from various sources, users accept it based on their existing beliefs and on social, cognitive, and political factors. Oh et al. [15] observed that during crises, people by default tend to believe unverified information, especially when it helps them make sense of the situation. Misinformation has significant effects on individuals and society: loss of lives [9, 15, 28, 30], economic loss [9, 44], loss of health [32, 35], and loss of reputation [38, 43] are the major outcomes of misinformation that emerged from our review.

5.2 Strategies for controlling the spread of misinformation

Discourse on the mitigation of social media misinformation has prioritized strategies such as early communication from officials and the use of scientific evidence [9, 35]. When people realize that received information is false, they are less likely to share it with others [15]. Another strategy is rumor refutation: reducing citizens' intention to spread misinformation by providing real information, which reduces their uncertainty and serves to control misinformation [44]. Rumor correction models for social media platforms also employ algorithms and crowdsourcing [28]. The majority of the papers we reviewed suggested fact-checking by experts, source rating of received information, attaching warning tags to headlines or entire news items [36], and the flagging of content by platform owners [40] as strategies to control the spread of misinformation. Studies on controlling misinformation in the public health context showed that governments can also seek the help of public health professionals to mitigate misinformation [31].

However, the aforementioned strategies have been criticized for several limitations. Most papers identified confirmation bias as having a significant impact on misinformation mitigation strategies, especially in the political context, where people tend to believe information that matches their prior beliefs. Garrett and Poulsen [40] argued that during an emergency, a misinformation recipient may not be able to judge the misinformation as true or false; thus, providing an alternative explanation or the real information to users has more effect than providing a fact-checking report. Studies by Garrett and Poulsen [40] and Pennycook et al. [39] reveal a drawback of attaching warning tags to news headlines: once flagging or tagging is introduced, information without tags is considered true or reliable, creating an implied truth effect. Further, it is not always practical to evaluate all social media posts. Similarly, Kim and Dennis [36] studied fake news flagging and found that fake news flags did not influence users’ beliefs; however, the flags created cognitive dissonance, and users went in search of the truthfulness of the headline. Later, in 2017, Facebook discontinued its fake news flagging service owing to these limitations [45].

6 Key research gaps and future directions

Although misinformation is a multi-sectoral issue, our systematic review observed that interdisciplinary research on social media misinformation is relatively scarce. Confirmation bias is one of the most significant behavioral problems that motivates the spread of misinformation; the lack of research on it reveals scope for future interdisciplinary research across the fields of Data Science, Information Systems, and Psychology in domains such as politics and health care. In the disaster context, there is scope to study the behavior of first responders and emergency managers to understand their information-exchange patterns with the public. Similarly, future researchers could analyze communication patterns between citizens and frontline workers in the public health context, which may be useful for designing counter-misinformation campaigns and awareness interventions. Since information disorder is a multi-sectoral issue, researchers also need to understand misinformation patterns across multiple government departments to enable coordinated counter-misinformation interventions.

There is a further dearth of studies on institutional responses to control misinformation. To fill this gap, future studies could concentrate on analyzing governmental and organizational interventions to control misinformation at the level of policies, regulatory mechanisms, and communication strategies. For example, India has no specific law against misinformation, but there are some provisions in the Information Technology Act (IT Act) and the Disaster Management Act that can be used to control misinformation and disinformation. An example of an awareness intervention is the initiative named ‘Satyameva Jayate’, launched in the Kannur district of Kerala, India, which focused on sensitizing school children to spot misinformation [67]. As noted earlier, within research on misinformation in the political context, there is a lack of research on strategies adopted by the state to counter misinformation; building on cases like 'Satyameva Jayate' would further contribute to knowledge in this area.

Technology-based strategies adopted by social media platforms to control the spread of misinformation emphasize corrective algorithms, keywords, and hashtags as solutions [32, 37, 43]. However, these corrective measures have their own limitations. Misinformation-correcting algorithms are ineffective if not applied immediately after the misinformation has been created. Researchers use related hashtags and keywords to retrieve content shared on social media platforms, but it may not be possible to cover all the keywords or hashtags employed by users, and algorithms may not decipher content shared in regional languages (see the sketch below). Another limitation is that platform algorithms recommend and display content based on user activities and interests, which limits users' access to information from multiple perspectives and thus reinforces their existing beliefs [29]. A reparative measure is to display corrective information as 'related stories' alongside misinformation. However, Facebook’s related-stories algorithm activates only when an individual clicks on an outside link, which limits the number of people who will see the corrective information, and this turns out to be a challenge. Future research could investigate the impact of related stories as a corrective measure by analyzing the relation between misinformation and the frequency of related stories posted vis-à-vis real information.
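
To make the coverage limitation concrete, the toy Python sketch below filters posts against a tracked keyword/hashtag list; the posts and the tracking list are our own invented examples, not data from any reviewed study.

```python
# Toy keyword/hashtag filter of the kind used to collect platform data.
tracked_terms = {"#fakenews", "misinformation", "rumor"}

posts = [
    "Beware of #fakenews about the flood relief camps",
    "This vaccine claim is pure misinformation",
    "Forwarded as received: miracle cure for the virus!",  # differently phrased, missed
    "வதந்தியை நம்ப வேண்டாம்",  # regional-language post, never matches
]

# A post is captured only if it contains at least one tracked term.
captured = [p for p in posts if any(t in p.lower() for t in tracked_terms)]
print(f"captured {len(captured)} of {len(posts)} posts")  # captured 2 of 4 posts
```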

Our review also found a scarcity of research on the spread of misinformation on certain social media platforms, while studies are skewed toward a few others. Of the studies reviewed, 15 articles concentrated on misinformation spread on Twitter and Facebook. Although recent news reports make it evident that misinformation and disinformation spread largely through popular messaging platforms like WhatsApp, Telegram, WeChat, and Line, research using data from these platforms is scanty. In the Indian context especially, the magnitude of problems arising from misinformation on WhatsApp is overwhelming [68]. To address this lacuna, we suggest that future researchers concentrate on investigating the patterns of misinformation spreading on platforms like WhatsApp. Moreover, message diffusion patterns are unique to each social media platform, so it is useful to study misinformation diffusion patterns on different platforms. Future studies could also address the differential roles, patterns, and intensity of the spread of misinformation across various messaging and photo/video-sharing social networking services.

As evident from our review, most research on misinformation is based on Euro-American contexts, and the dominant models proposed for controlling misinformation may have limited applicability to other regions. Moreover, the popularity of social media platforms and usage patterns vary across the globe as a consequence of cultural differences and political regimes, necessitating that social media researchers take cognizance of the empirical experiences of 'left-over' regions.

7 Conclusion

To understand the spread of misinformation on social media platforms, we conducted a systematic literature review in three important domains where misinformation is rampant: disaster, health, and politics. We reviewed 28 articles relevant to the themes chosen for the study. This is one of the earliest reviews focusing on social media misinformation research, especially across these three sensitive domains. We have discussed how misinformation spreads in the three sectors, the methodologies used by researchers, the theoretical perspectives employed, the Antecedents-Misinformation-Outcomes (AMIO) framework for understanding key concepts and their inter-relationships, and strategies to control the spread of misinformation.

Our review also identified major gaps in IS research on misinformation in social media, including the need for methodological innovations beyond the experimental methods that have been widely used. This study has limitations that we acknowledge. We might not have identified all relevant papers on the spread of misinformation on social media, as some authors might have used different keywords and our inclusion and exclusion criteria were strict. There might also have been relevant publications in languages other than English that this review did not cover. Our focus on three domains also restricted the number of papers we reviewed.