1 Media and the Human Experience

Empiricism is the notion that knowledge originates from sensory experience. Implicit in this statement is the idea that we can trust our senses. But in today’s world, much of the human experience is mediated through digital technologies. Our sensory experiences can no longer be trusted a priori. The evidence before us—what we see and hear and read—is, more often than not, manipulated.

This presents a profound challenge to a species that for most of its existence has relied almost entirely on sensory experience to make decisions. A multi-university, multi-disciplinary group of researchers who study collective behavior recently assessed that the study of the impact of digital communications technologies on humanity should be treated as a “crisis discipline,” alongside fields such as climate change and medicine (Bak-Coleman et al. 2021). Noting that “our societies are increasingly instantiated in digital form,” they argue that the potential of digital media to perturb the ways in which we organize ourselves, deal with conflict, and solve problems is profound:

Our social adaptations evolved in the context of small hunter-gatherer groups solving local problems through vocalizations and gestures. Now we face complex global challenges from pandemics to climate change—and we communicate on dispersed networks connected by digital technologies such as smartphones and social media.

The evidence, they say, is clear: in this new information environment, misinformation, disinformation, and media manipulation pose grave threats, from the spread of conspiracy theories (Narayanan et al. 2018) and the rejection of recommended public health interventions (Koltai 2020) to the subversion of democratic processes (Atlantic Council 2020) and even genocide (Łubiński 2020).

Media manipulation is only one of a series of problems in the digital information ecosystem—but it is a profound one. For most of modern history, the trust and verifiability of media—whether printed material, audio, or, more recently, images, video, and other modalities—have been linked to one’s ability to access the media and corroborate it against previously gathered facts and beliefs. Consequently, to deceive someone, the deceiver had to obfuscate a media object’s salient features and hope that the consumer lacked the time or ability to perceive the differences. This generalization extends from boasts and lies told in the schoolyard, to false narratives circulated by media outlets, to large-scale disinformation campaigns pursued by nations during military conflicts.

A consistent property and limitation of deceptive practices is that their success depends on two conditions. The first is that the ability to detect false information depends on the amount of time available to process and verify it before acting on it. For instance, a photoshopped face shown only briefly is more likely to be accepted as genuine than one the viewer has ample time to examine. The second is the amount of time and effort required of the deceiver to create the manipulation. Hence, the impact and reach of a manipulation are limited by the resources and effort involved. A kid in school can deceive their friends with a photoshopped image, but it took Napoleon and the French Empire to flood the United Kingdom with fake pound notes during the Napoleonic Wars and cause a currency crisis.

A set of technological advancements over the last 30 years has caused a tectonic shift in this dynamic. With the adoption of the Internet and social media in the 1990s and 2000s, respectively, the marginal cost of deceiving a victim has plummeted. Any person on the planet with a modicum of financial capital can now reach billions of people with their message, effectively breaking a deceiver’s dependence on resources. This condition is unprecedented in the history of the world.

Furthermore, the maturation and spread of machine-learning-powered media technologies has made it possible to generate diverse media at scale, lowering the barrier to entry for content creators of all kinds. The amount of work required to deliver mass content at Internet scale keeps falling, which inevitably compounds the problem of trust online.

As a result, our society is approaching a precipice with far-reaching consequences. Media manipulation is now easy for anyone, even those without significant resources or technical skills. Meanwhile, as the amount of media available to people continues to explode, the time devoted to verifying any single piece of content keeps shrinking. If we are to have any hope of adapting human sensemaking to this digital age, we will need media forensics incorporated into our digital information environment at multiple layers. We will require automated tools to augment our human abilities to make sense of what we read, hear, and see.

2 The Threat to Democracy

One of the most pressing consequences of media manipulation is the threat to democracy. Concern over trust and the role of information in democracies is acute. Globally, democracy has hit a rough patch, with 2020 perhaps among its worst years in decades. The Economist Intelligence Unit’s Democracy Index found that as of 2020 only “8.4% of the world’s population live in a full democracy while more than a third live under authoritarian rule” (The Economist 2021). In the United States, the twin challenges of election disinformation and the fraught public discourse on the COVID-19 pandemic exposed weaknesses in the information ecosystem.

Technology—once regarded as a boon to democracy—is now regarded by some as a threat to it. For instance, a 2019 survey of 1,000 experts by the Pew Research Center and Elon University’s Imagining the Internet Center asked about the impact of citizens’, civil society groups’, and governments’ use of technology on core aspects of democracy and democratic representation. Of the “technology innovators, developers, business and policy leaders, researchers, and activists” surveyed, 49% said technology would “mostly weaken core aspects of democracy and democratic representation in the next decade,” while 33% said technology would mostly strengthen them.

Perhaps the reason for such pessimism is that current interventions to manage trust in the information ecosystem are not taking place against a static backdrop. The rise of social media has accompanied a decline of trust in institutions and the media generally, exacerbating a seemingly intractable set of social problems that create ripe conditions for the proliferation of disinformation and media manipulation.

Yet the decade ahead will see even more profound disruptions to the ways in which information is produced, distributed, sought, and consumed. A number of technological developments are driving the disruption, including the rollout of 5G wireless networks, which will enable richer media experiences such as AR/VR and allow more devices to exchange information more frequently. The continued advance of computational techniques to generate, manipulate, and target content adds to the problem. The pace of these developments suggests that, by the end of the decade, synthetic media may radically change the information ecosystem and demand new architectures and systems to aid sensemaking.

Consider a handful of recent developments:

  • Natural-sounding text-to-speech enables technologies like Google Assistant, but is also being used for voice-phishing scams.

  • Fully generated, realistic images from StyleGAN2—a neural network model—are being leveraged to create fake profiles that deceive users on social media.

  • Deep Fake face-swapping videos are now capable of fooling viewers who do not pay close attention.

It is natural, in this environment, that the study of the threat itself is a thriving area of inquiry. Experts are busy compiling threats, theorizing threat models, studying the susceptibility of people to different types of manipulations, exploring what interventions build resilience to deception, helping to inform the debate about what laws and rules need to be in place, and offering input on the policies technology platforms should have in place to protect users from media manipulation.

In this chapter, we will focus on the emerging threat landscape in order to give the reader a sense of the importance of media forensics as a discipline. While the technical work of authenticating text, images, video, and audio and their myriad combinations may not seem to qualify as a “crisis discipline,” we believe it does.

3 New Technologies, New Threats

In this section, we describe selected research trends with high potential impact on the threat landscape of synthetic media generation. We hope to ground the reader in the art of the possible at the cutting edge of research and provide some intuition about where the field is headed. We briefly explain the key contributions and why we believe they are significant for students and practitioners of media forensics. This list is not exhaustive.

These computational techniques will underpin synthetic media applications in the future.

3.1 End-to-End Trainable Speech Synthesis

Most speech generation systems include several processing steps that are customarily trained in isolation (e.g., text-to-spectrogram generators, vocoders). Moreover, while generative adversarial networks (GANs) have made tremendous progress on visual data, their application to human speech remains limited. A recent paper addresses both of these issues with an end-to-end adversarial training protocol (Donahue et al. 2021). The paper describes a fully differentiable feedforward architecture for text-to-speech synthesis, which reduces the amount of training-time supervision needed by eliminating the need for linguistic feature alignment. This further lowers the barrier to engineering novel speech synthesis systems by simplifying the data pre-processing required to generate speech from textual inputs.
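To make the idea of end-to-end differentiability concrete, the following toy sketch (our own simplification in PyTorch, not the architecture of Donahue et al. 2021) builds a character-to-spectrogram model in which every stage, from character embedding to mel projection, is a differentiable module, so a single loss can train the whole pipeline; the fixed upsampling factor stands in for the learned alignment that real systems require.

# Toy, fully differentiable character-to-spectrogram model (hypothetical sketch).
import torch
import torch.nn as nn

class ToyFeedforwardTTS(nn.Module):
    def __init__(self, vocab_size=64, channels=128, n_mels=80, upsample=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, channels)              # characters -> vectors
        self.encoder = nn.Sequential(                                # local context over characters
            nn.Conv1d(channels, channels, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=5, padding=2), nn.ReLU())
        self.upsample = nn.Upsample(scale_factor=upsample)           # crude stand-in for duration/alignment
        self.to_mel = nn.Conv1d(channels, n_mels, kernel_size=1)     # project to mel-spectrogram bins

    def forward(self, tokens):                                       # tokens: (batch, characters)
        x = self.embed(tokens).transpose(1, 2)                       # (batch, channels, characters)
        x = self.upsample(self.encoder(x))                           # (batch, channels, frames)
        return self.to_mel(x)                                        # (batch, n_mels, frames)

model = ToyFeedforwardTTS()
tokens = torch.randint(0, 64, (2, 50))                               # two dummy "sentences"
target = torch.randn(2, 80, 400)                                     # dummy mel targets (50 chars x 8 frames)
loss = nn.functional.l1_loss(model(tokens), target)
loss.backward()                                                      # gradients flow through every stage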

Another attack vector undergoing rapid progress is voice cloning (Hao 2021). Voice cloning uses a short recording of one person’s voice to synthesize new speech in a very similar voice, which can then be applied in various inauthentic capacities. A recent paper proposes a novel speech synthesis system, NAUTILUS, which brings text-to-speech and voice cloning under the same framework (Luong and Yamagishi 2020). This demonstrates that a single end-to-end model could serve multiple speech synthesis tasks and could lead to the kind of unification recently seen in natural language models (e.g., BERT (Devlin et al. 2019), where a single architecture achieves state of the art on multiple benchmark problems).

3.2 GAN-Based Codecs for Still and Moving Pictures

Generative adversarial networks (GANs) have revolutionized image synthesis as they rapidly evolved from small-scale toy datasets to high-resolution photorealistic content. They exploit the capacity of deep neural networks to learn complex manifolds and enable generative sampling of novel content. While the research started from unconditional, domain-specific generators (e.g., human faces), recent work focuses on conditional generation. This regime is of particular importance for students and practitioners of media forensics as it enables control of the generation process.

One area where this technology is underappreciated with regard to its misuse potential is lossy codecs for still and moving pictures. Visual content abounds with inconsequential background content and abstract textures (e.g., grass, water, clouds) which can be synthesized at high quality rather than explicitly coded. This is particularly important in the limited-bandwidth regimes typical of mobile applications, social media, and video conferencing. Examples of recent techniques in this area include GAN-based video-conferencing compression from Nvidia (Nvidia Inc 2020) and the first high-fidelity GAN-based image codec from Google (Mentzer et al. 2020).

This may have far-reaching consequences. Firstly, effortless manipulation of video conference streams may lead to novel man-in-the-middle attacks and further blur the boundary between acceptable and malicious alterations (e.g., while replaced backgrounds are often acceptable, altered faces or expressions probably are not). Secondly, the ability to synthesize an infinite number of content variations (e.g., decode the same image with different-looking trees) poses a threat to reverse image search—one of the very few reliable techniques available to fact checkers today.
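The core training recipe behind such codecs can be sketched in a few lines. The code below is a minimal toy of our own (not the Mentzer et al. 2020 or Nvidia systems): an autoencoder is trained with a distortion loss plus an adversarial loss, so missing detail is synthesized plausibly rather than reconstructed exactly, which is precisely what makes the decoded content non-unique.

# Minimal sketch of a GAN-based lossy image codec (toy illustration).
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(64, 8, 4, stride=2, padding=1))          # image -> compact latent
dec = nn.Sequential(nn.ConvTranspose2d(8, 64, 4, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1)) # latent -> reconstructed image
disc = nn.Sequential(nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                     nn.Conv2d(64, 1, 4, stride=2, padding=1))         # judges realism of reconstructions

x = torch.rand(4, 3, 64, 64)                      # dummy batch of images
z = enc(x)
z_hat = z + (torch.round(z) - z).detach()         # quantization with a straight-through gradient
x_hat = dec(z_hat)

distortion = nn.functional.mse_loss(x_hat, x)     # stay close to the original...
adversarial = -disc(x_hat).mean()                 # ...while also looking "real" to the discriminator
(distortion + 0.01 * adversarial).backward()      # codec update (the discriminator step is omitted)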

3.3 Improvements in Image Manipulation

Progress in machine learning, computer vision, and graphics has been steadily lowering the barrier (i.e., skill, time, compute) to convincing image manipulation. This threat is well recognized, so we simply report a novel manipulation technique recently developed by the University of Washington and Adobe (Zhang et al. 2020). The authors propose a method for object removal from photographs that automatically detects and removes the object’s shadow. This may be another blow to image forensic techniques that look for (in)consistency of physical traces in photographs.

3.4 Trillion-Parameter Models

The historical trend of throwing more compute at a problem to find the performance ceiling continues with large-scale language models (Radford and Narasimhan 2018; Kaplan et al. 2020). Following on the heels of the much-publicized GPT-3 (Brown et al. 2020), Google Brain recently published their new Switch Transformer model (Fedus et al. 2021)—roughly a 10\(\times \) increase in parameter count over GPT-3. Interestingly, research trends related to increasing efficiency and reducing costs are paralleling the development of these large models.

With large sparse models, it is becoming apparent that lower precision (e.g., 16-bit) can be used effectively, combined with new chip architecture improvements targeting these lower bit precisions (Wang et al. 2018; Park and Yoo 2020). Furthermore, because of the scale of data needed to train these enormous models, sparsification techniques such as mixtures of experts are being explored further (Jacobs et al. 1991; Jordan and Jacobs 1993). New frameworks are also emerging to facilitate distribution of ML models across many machines (e.g., Tensorflow Mesh, which was used in Google’s Switch Transformers) (Shazeer et al. 2018). Finally, an important trend stemming from this development is improvement in model compression and distillation, a concept explored in our first symposium as a potential attack vector.
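The routing mechanism that makes such sparse scaling possible is simple to illustrate. The sketch below is our own toy version of top-1 ("switch") expert routing, not Google's implementation: each token is sent to a single feedforward expert, so parameter count grows with the number of experts while per-token compute stays roughly constant.

# Toy top-1 ("switch") mixture-of-experts layer (illustrative sketch only).
import torch
import torch.nn as nn

class ToySwitchLayer(nn.Module):
    def __init__(self, d_model=64, d_ff=256, n_experts=4):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)          # scores each token for each expert
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts))

    def forward(self, x):                                    # x: (tokens, d_model)
        probs = torch.softmax(self.router(x), dim=-1)        # routing probabilities
        expert_idx = probs.argmax(dim=-1)                    # top-1 expert per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = expert_idx == i                           # tokens assigned to expert i
            if mask.any():
                # scale by routing probability so the router also receives gradient
                out[mask] = expert(x[mask]) * probs[mask, i].unsqueeze(-1)
        return out

layer = ToySwitchLayer()
print(layer(torch.randn(10, 64)).shape)                      # torch.Size([10, 64])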

3.5 Lottery Tickets and Compression in Generative Models

Following in the steps of a landmark 2018 paper proposing the lottery ticket hypothesis (Frankle and Carbin 2019), work continues on intelligent compression and distillation techniques that prune large models without losing accuracy. A recent paper presented at ICLR 2021 applies this hypothesis to GANs (Chen et al. 2021). There have also been interesting approaches to improving the training dynamics of subnetworks by leveraging differential solvers to stabilize GANs (Qin et al. 2020). The holy grail of predicting winning subnetworks or initialization weights remains elusive, so the trend of training ever-larger models and then attempting to compress or distill them will continue until the next ceiling is reached.
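A minimal version of the prune-and-rewind recipe can be written directly with PyTorch's built-in pruning utilities. The sketch below is our own toy (dummy data, arbitrary hyperparameters), not the procedure of Frankle and Carbin or Chen et al.: train briefly, prune the smallest-magnitude weights, rewind the survivors to their initial values, and retrain the sparse "ticket."

# Toy lottery-ticket-style pruning sketch.
import copy
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1))
initial_state = copy.deepcopy(model.state_dict())      # remember the original initialization

def train(model, steps=100):
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    x, y = torch.randn(256, 20), torch.randn(256, 1)    # dummy regression data
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.mse_loss(model(x), y).backward()
        opt.step()

train(model)                                            # 1. train the dense network
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)   # 2. prune 50% by magnitude

with torch.no_grad():                                   # 3. rewind surviving weights to init
    for name, module in model.named_modules():
        if isinstance(module, nn.Linear):
            module.weight_orig.copy_(initial_state[f"{name}.weight"])

train(model)                                            # 4. retrain the sparse subnetwork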

4 New Developments in the Private Sector

To understand how the generative threat landscape is evolving, it is important to track developments from industry that could be used in media manipulation. Here, we include research papers, libraries for developers, APIs for the general public, as well as polished products. The key players include research and development divisions of large corporations, such as Google, Nvidia, and Adobe, but we also see relevant activities from the startup sector.

4.1 Image and Video

Nvidia Maxine (Nvidia Inc 2020)—Nvidia released a fully accelerated platform software development kit that lets developers of video conferencing services build and deploy AI-powered features using state-of-the-art models in their cloud. A notable feature is GAN-based video compression, in which human faces are synthesized from facial landmarks and a single reference image. While this significantly improves the user experience, particularly on low-bandwidth connections, it opens up potential for abuse.

Fig. 2.1 Synthesized images generated by DALL-E based on a prompt requesting “snail-like tanks”

Smart Portraits and Neural Filters in Photoshop (Nvidia Inc 2021)—Adobe introduced neural filters into its flagship photo editor, Photoshop. The addition includes Smart Portraits, a machine-learning-based face editing feature that brings the latest face synthesis capabilities to the masses. This development continues the trend of bringing advanced ML-powered image editing to mainstream products. Other notable examples of commercially available AI editing tools include Anthropics’ PortraitPro (Anthropics Inc 2021) and Skylum’s Luminar (Skylum Inc 2021).

ImageGPT (Chen et al. 2020) and DALL-E (Ramesh et al. 2021)—OpenAI is experimenting with extending its GPT-3 language model to other data modalities. ImageGPT is an autoregressive image synthesis model capable of convincing completion of image prompts. DALL-E extends this idea by combining visual and language tokens, allowing image synthesis from textual prompts. The model shows a remarkable capability to generate plausible novel content from abstract concepts, e.g., “avocado chairs” or “snail tanks,” see Fig. 2.1.

Runway ML (RunwayML Inc 2021)—Runway ML is a Brooklyn-based startup that provides an AI-powered tool for artists and creatives. Its software brings the most recent open-source ML models to the masses and allows their adoption in image and video editing workflows.

Imaginaire (Nvidia Inc 2021)—Nvidia released a PyTorch library containing optimized implementations of several image and video synthesis methods. The tool is intended for developers and provides models for supervised and unsupervised image-to-image and video-to-video translation.

4.2 Language Models

GPT-3 (Brown et al. 2020)—OpenAI developed a 175-billion-parameter language model. It performs well in few-shot learning scenarios, producing nuanced text and functioning code. Shortly after, the model was released via a Web API for general-purpose text-to-text mapping.

Switch Transformers (Fedus et al. 2021)—Six months after the GPT-3 release, Google announced a record-breaking language model featuring over a trillion parameters. The model is described in a detailed technical manuscript and was published along with a Tensorflow Mesh implementation.

CLIP (Radford et al. 2021)—Released in early January 2021 by OpenAI, CLIP is an impressive zero-shot image classifier, pre-trained on 400 million image-text pairs. Given a list of descriptions, CLIP predicts which description, or caption, best matches a given image.
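Because the model and code are openly released, zero-shot classification takes only a few lines. The sketch below follows the usage pattern published with the open-source CLIP package (github.com/openai/CLIP) and assumes the package, its model weights, and a local image file named photo.jpg are available; the candidate captions are arbitrary examples of our own.

# Zero-shot image classification with the released CLIP model.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("photo.jpg")).unsqueeze(0).to(device)
captions = ["a photo of a real person", "a computer-generated face", "a photo of a landscape"]
text = clip.tokenize(captions).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)     # similarity of the image to each caption
    probs = logits_per_image.softmax(dim=-1)

for caption, p in zip(captions, probs[0].tolist()):
    print(f"{p:.2f}  {caption}")                 # the highest probability is the predicted caption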

5 Threats in the Wild

In this section, we summarize observed disinformation campaigns and scenarios relevant to students and practitioners of media forensics, underscoring new tactics and the evolving resources and capabilities of adversaries.

5.1 User-Generated Manipulations

Thus far, user-generated media aimed at manipulating public opinion have, by and large, not made use of sophisticated manipulation capabilities beyond the image, video, and sound editing software available to the general public. Multiple politically charged manipulated videos emerged and gained millions of views during the second half of 2020. Documented cases related to the US elections include manipulation of a speaker’s visual background (Reuters Staff 2021c), addition of manipulated audio (Weigel 2021), cropping of footage (Reuters Staff 2021a), manipulative editing of a video, and an attempt to spread the false claim that an authentic video was a “deep fake” production (Reuters Staff 2021b).

In all of the cases mentioned, the videos gained millions of views within hours on Twitter, YouTube, Facebook, Instagram, TikTok, and other platforms before they were taken down or flagged as manipulated. Often the videos originated from anonymous social media accounts but were shared widely, including by prominent politicians and campaign accounts on Twitter, Facebook, and other platforms. Similar incidents emerged in India (e.g., an edited version of a video documenting police brutality) (Times of India Anam Ajmal 2021).

Today, any user can employ AI-generated images to create a fake persona. As recently reported in the New York Times (Hill and White 2021), Generated Photos (Generated Photos Inc 2021) can create “unique, worry-free” fake persons for “$2.99 or 1,000 people for $1000.” Similarly, Rosebud AI openly advertises on Twitter its tools for creating human animation (Rosebud AI Inc 2021).

Takedowns of far-right extremist and conspiracy-related accounts by several platforms, including Twitter, Facebook, and Instagram, together with WhatsApp’s announcement of changes to its privacy policies, resulted in reports of migration to alternative platforms and chat services. Though it is still too early to assess the long-term impact of this change, it has the potential to reshape the consumption and distribution of media, particularly among audiences already keen to proliferate conspiracy theories and disinformation.

5.2 Corporate Manipulation Services

In recent years, investigations by journalists, researchers, and social media platforms have documented several information operations conducted by state actors via marketing and communications firms around the world. The states involved included, among others, Saudi Arabia, the UAE, and Russia (CNN Clarissa Ward et al. 2021). Some researchers claim that outsourcing information operations has become “the new normal,” though so far the exposed operations did not spread original synthetic media (Grossman and Ramali 2021).

For countries seeking the benefits of inauthentic content but unable to develop such a capability internally, there are plenty of manipulation companies available for hire. Regimes that lack the technological resources to build a manipulation capability no longer need to worry: an entirely new industry of Advanced Persistent Manipulators (APMs) (GMFUS Clint Watts 2021) has emerged in the private sector to meet the demand. Many advertising agencies, able to produce manipulated content with ease, are hired by governments for public relations purposes, blurring the line between authentic and inauthentic content.

The use of a third party by state actors tends to be limited to a targeted campaign. In 2019, Twitter linked campaigns run by the Saudi firm Smaat to the Saudi government (Twitter Inc 2021). The firm used automated tools and networks of thousands of accounts to amplify pro-Saudi messages. The FBI has accused the firm’s CEO of acting as a Saudi government agent in an espionage case involving two former Twitter employees (Paul 2021).

In 2020, a CNN investigation exposed an outsourced Russian effort, run through Ghana and Nigeria, to influence U.S. politics. The operation targeted African American audiences in the United States using accounts on Facebook, Twitter, and Instagram, and was executed mostly manually by a dozen troll-farm employees in West Africa (CNN Clarissa Ward et al. 2021).

5.3 Nation State Manipulation Examples

For governments seeking to utilize, acquire, or develop synthetic media tools, the resources and technological capability needed to create and harness these tools are significant; hence the rise of the third-party actors described in Sect. 2.5.2. Below is a sampling of recent confirmed or presumed nation state manipulations.

People’s Republic of China

Over the last 2 years, Western social media platforms have encountered a sizeable amount of inauthentic images and video attributed to, or believed to stem from, China (The Associated Press ERIKA KINETZ 2021; Hatmaker 2021). To what degree these campaigns are directly connected to the Chinese government is unknown, but the incentives suggest that the Chinese Communist Party (CCP), which controls the state and the country’s information environment, creates or condones the deployment of these malign media manipulations.

Australia has made the most public stand against Chinese manipulations. In November 2020, a Chinese foreign ministry spokesman and leading CCP Twitter provocateur posted a fake image showing an Australian soldier with a bloody knife next to a child (BBC Staff 2021). The image sought to inflame tensions over recent findings that Australian soldiers had killed Afghan prisoners during the 2009 to 2013 period.

Fig. 2.2 Screenshots of posts from Facebook and Twitter

Aside from this highly public incident and the Australian condemnation, signs of Chinese employment of inauthentic media continue to mount. Throughout the fall of 2019 and of 2020, Chinese-language YouTube news channels proliferated. These new programs, created in high volume and distributed at vast scale, periodically mix real information and real people with selectively sprinkled inauthentic content, much of which is difficult to classify as intended manipulation or as narrated recreation of actual or perceived events. Graphika has reported on these “political spam networks” that distribute pro-Chinese propaganda, first targeting the Hong Kong protests in 2019 (Nimmo et al. 2021) and more recently adding English-language YouTube videos focused on US policy in 2020 (Nimmo et al. 2021). The reports document the use of inauthentic or misattributed content in both visual and audio form: accounts used AI-generated profile images and dubbed voice-overs.

A similar network of inauthentic accounts was spotted immediately following the 2021 US presidential inauguration. Users with Deep Fake profile photos, which bear a notable similarity to AI-generated photos from thispersondoesnotexist.com, tweeted verbatim excerpts lifted from the Chinese Ministry of Foreign Affairs, see Fig. 2.2. These alleged Chinese accounts were also enmeshed in a publicly discussed campaign, reported by The Guardian, amplifying a debunked ballot video that surfaced amid voter fraud allegations surrounding the U.S. presidential election.

The accounts’ employment of inauthentic faces (Fig. 2.3), while somewhat sloppy, does offer a way to avoid detection: as has been seen repeatedly, stolen real photos are easily caught when dropped into reverse image or facial recognition software.

Fig. 2.3 Three examples of Deep Fake profile photos from a network of 13 pro-China inauthentic accounts on Twitter. Screengrabs of photos directly from Twitter

France

Individuals associated with the French military have been linked to a 2020 information operation aimed at discrediting a similar Russian IO that employed trolls to spread disinformation in the Central African Republic and other African countries (Graphika and The Stanford Internet Observatory 2021). The French operation used fake media to create Facebook accounts. It is an interesting example of a tactic that employs a fake campaign to fight disinformation.

Russia/IRA

The Russian-sponsored Internet Research Agency was found to have created a small number of fake Facebook, Twitter, and LinkedIn accounts that amplified a website posing as an independent news outlet, peacedata.net (Nimmo et al. 2021). While IRA troll accounts have been prolific for years, this is the first observed instance of their use of AI-generated images. Fake profiles were created to lend credibility to peacedata.net, which hired unwitting freelance writers to provide left-leaning content for audiences, mostly in the US and UK, as part of a calculated, albeit ineffective, disruption campaign.

5.4 Use of AI Techniques for Deception 2019–2020

During 2019–2020, at the height of the COVID-19 pandemic, nefarious uses of generative AI techniques gained popularity. The following are examples of various modalities employed for the purposes of deception and information operations (IO).

Ultimately, we also observe that commercial developments are rapidly advancing the capability for video and image manipulation. While the resources required to train GANs and other models will remain a prohibitive factor in the near future, improvements in image codecs, larger parameter counts, lower precision, and compression/distillation create additional threat opportunities. The threat landscape will change rapidly in the next 3 to 5 years, requiring media forensics capabilities to advance quickly to keep up with the capabilities of adversaries.

GAN-Generated Images

The most prevalent documented use of AI capabilities in IO so far has been the use of freely available tools to create fake personas online. Over the past year, the use of GAN technology has become increasingly common in operations linked to state actors or to organizations driven by political motivations, see Fig. 2.3. The technology was mostly applied to generating profile pictures for inauthentic users on social media, relying on openly available tools such as thispersondoesnotexist.com and Generated Photos (Generated Photos Inc 2021).

Several operations used GANs to create hundreds to thousands of accounts that spammed platforms with content and comments, in an attempt to amplify messages. Some of these operations were attributed to Chinese actors and to PR firms.

Other operations limited the use of GAN-generated images to a handful of more “believable” fake personas that appeared as content creators. At least one of those efforts was attributed to Russia’s IRA. In at least one case, a LinkedIn account with a GAN-generated image was used as part of a suspected espionage effort to contact US national security experts (Associated Press 2021).

Synthetic Audio

In recent years, there have been at least three documented attempts, one of them successful, to defraud companies and individuals using synthetic audio. Two cases involved the use of synthetic audio to impersonate a company CEO in a phone call. In 2019, a British energy company transferred over $200,000 following a fraudulent phone call that used synthetic audio to impersonate the parent company’s CEO and request the transfer (Wall Street Journal Catherine Stupp 2021). A similar attempt to defraud a US-based company failed in 2020 (Nisos 2021). Symantec reported at least three cases in 2019 of synthetic audio being deployed to defraud private companies (The Washington Post Drew Harwell 2021).

Deception Using GPT-3

While user-generated manipulations aimed at influencing public opinion have so far avoided advanced technology, there was at least one attempt to use GPT-3 to generate automated text, by an anonymous user/bot on Reddit (The Next Web Thomas Macaulay 2021). The effort appears to have abused a third-party app’s access to the OpenAI API and produced hundreds of comments. In some cases, the bot published conspiracy theories and made false statements.

Additionally, recent reports noted that GPT-3 has an “anti-Muslim” bias (The Next Web Tristan Greene 2021). This finding points to the potential damage that could be caused by apps using the technology, even without malicious intent.

IO and Fraud Using AI Techniques

Over a dozen cases of synthetic media being used for information operations and other criminal activities were recorded during the past couple of years. In December 2019, social media platforms noted the first use of AI techniques in a case of “coordinated inauthentic behavior,” attributed to The BL, a company with ties to the publisher of The Epoch Times, according to Facebook. The company created hundreds of fake Facebook accounts, using GAN techniques to generate synthetic profile pictures. The use of GANs to create portrait pictures of non-existent personas recurred in more than a handful of IO discovered during 2020, including operations attributed by social media platforms to Russia’s IRA, to China, and to a US-based PR firm.

6 Threat Models

Just as the state of the art in technologies for media manipulation has advanced, so has theorizing around threat models. Conceptual models and frameworks used to understand and analyze threats and describe adversary behavior are evolving alongside what is known of the threat landscape. It is useful for students of media forensics to understand these frameworks generally, as they are used in the field by analysts, technologists, and others who work to understand and mitigate against manipulations and deception.

It is useful to apply different types of frameworks and threat models to different kinds of threats. There is a continuum between the sub-disciplines and sub-concerns that make up media and information manipulation, and it is tempting at times to put too much under the same umbrella. The processes and models used to analyze COVID-19 misinformation, for instance, are very different from those used to tackle covert, typically state-sponsored influence operations, or to understand attacks that are primarily financially motivated and conducted by criminal groups. It is important to think about which models apply in which circumstances, and specifically which disciplines can contribute to which models.

The frameworks below are not a comprehensive selection of threat models, but they serve as representative examples.

Fig. 2.4 BEND framework

6.1 Carnegie Mellon BEND Framework

Developed by Dr. Kathleen Carley, the BEND framework is a conceptual system that seeks to make sense of influence campaigns and their strategies and tactics. Carley notes that the “BEND framework is the product of years of research on disinformation and other forms of communication based influence campaigns, and communication objectives of Russia and other adversarial communities, including terror groups such as ISIS that began in late December of 2013.” The letters in the acronym BEND refer to elements of the framework; “B”, for example, references the “Back, Build, Bridge, and Boost” influence maneuvers.

6.2 The ABC Framework

Initially developed by Camille Francois, then Chief Innovation Officer at Graphika, the ABC framework offers a mechanism to understand “three key vectors characteristic of viral deception” in order to guide potential remedies (François 2020). In this framework:

  • A is for Manipulative Actors. In Francois’s framework, manipulative actors engage knowingly and with clear intent in their deceptions.

  • B is for Deceptive Behavior. Behavior encompasses all of the techniques and tools a manipulative actor may use, from engaging a troll farm or a bot army to producing manipulated media.

  • C is for Harmful Content. Ultimately, it is the message itself that defines a campaign and often its efficacy.

Fig. 2.5 ABCDE framework

Later, Alexandre Alaphilippe, Executive Director of the EU DisinfoLab, proposed adding a D for Distribution, to capture the importance of the means of distribution to the overall manipulation strategy (Brookings Inst Alexandre Alaphilippe 2021). Yet another letter joined the acronym courtesy of James Pamment in the fall of 2020: Pamment proposed E, for evaluating the Effect of the manipulation campaign (Fig. 2.5).

6.3 The AMITT Framework

The Misinfosec and Credibility Coalition communities, led by Sara-Jayne Terp, Christopher Walker, John Gray, and Dr. Pablo Breuer, developed the AMITT (Adversarial Misinformation and Influence Tactics and Techniques) frameworks to codify the behaviors (tactics and techniques) of disinformation threats and responders, enabling rapid alerts, risk assessment, and analysis. AMITT proposes diagramming, and treating equally, both threat actor behaviors and the target’s response behaviors (Gray and Terp 2019). The AMITT TTP framework, which catalogs tactics, techniques, and procedures, is the disinformation counterpart of the ATT&CK model, an information security standard threat model. The AMITT STIX model is the disinformation counterpart of STIX, an information security data-sharing standard.

6.4 The SCOTCH Framework

In order to enable analysts “to comprehensively and rapidly characterize an adversarial operation or campaign,” Dr. Sam Blazek developed the SCOTCH framework (Blazek 2021) (Fig. 2.6).

  • S—Source—the actor who is responsible for the campaign.

  • C—Channel—the platform and its affordances and features that deliver the campaign.

  • O—Objective—the objective of the actor.

  • T—Target—the intended audience for the campaign.

  • C—Composition—the language, content, or message of the campaign.

  • H—Hook—the psychological phenomena being exploited by the campaign.

Fig. 2.6 AMITT framework

Blazek offers this example of how SCOTCH might describe a campaign:

At the campaign level, a hypothetical example of a SCOTCH characterization is: Non-State Actors used Facebook, Twitter, and Low-Quality Media Outlets in an attempt to Undermine the Integrity of the 2020 Presidential Election among Conservative American Audiences using Fake Images and Videos and capturing attention by Hashtag Hijacking and Posting in Shared-Interest Facebook Groups via Cutouts (Fig. 2.7).

Fig. 2.7 Deception model effects

6.5 Deception Model Effects

Still other frameworks are used to explain how disinformation diffuses in a population. Mathematicians have employed game theory to treat information as quantities that diffuse over networks according to certain dynamics. For instance, Carlo Kopp, Kevin Korb, and Bruce Mills model cooperation and diffusion in populations exposed to “fake news” (Kopp et al. 2018):

This model “depicts the relationships between the deception models and the components of system they are employed to compromise.”
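To give a feel for this style of modeling, the toy simulation below (an independent-cascade-style sketch of our own, not the game-theoretic model of Kopp et al.) spreads a false claim over a small-world network: each newly deceived node convinces each susceptible neighbor with a fixed probability, and the final reach depends heavily on that probability and the network structure.

# Toy simulation of false-information diffusion over a social network.
import random
import networkx as nx

random.seed(0)
G = nx.watts_strogatz_graph(n=500, k=6, p=0.1)     # a small-world "social" network
p_spread = 0.15                                    # chance a deceived node convinces a neighbor

deceived = {0}                                     # a single seed account spreads the claim
frontier = {0}
while frontier:
    next_frontier = set()
    for node in frontier:
        for neighbor in G.neighbors(node):
            if neighbor not in deceived and random.random() < p_spread:
                deceived.add(neighbor)
                next_frontier.add(neighbor)
    frontier = next_frontier

print(f"{len(deceived)} of {G.number_of_nodes()} nodes end up deceived")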

6.6 4Ds

Some frameworks focus on the goals of manipulators. Ben Nimmo, then at the Central European Policy Institute and now at Facebook, where he leads global threat intelligence and strategy against influence operations, developed the 4Ds to describe the tactics of Russian disinformation campaigns (StopFake.org 2021). “They can be summed up in four words: dismiss, distort, distract, dismay,” wrote Nimmo (Fig. 2.8).

  • Dismiss may involve denying allegations made against Russia or a Russian actor.

  • Distort includes outright lies, fabrications, or obfuscations of reality.

  • Distract includes methods of turning attention away from one phenomenon to another.

  • Dismay is a method to get others to regard Russia as too significant a threat to confront.

Fig. 2.8 Advanced persistent manipulators framework

6.7 Advanced Persistent Manipulators

Clint Watts, a former FBI Counterterrorism official, similarly thinks about the goals of threat actors and describes, in particular, nation states and other well-resourced manipulators as Advanced Persistent Manipulators.

They use combinations of influence techniques and operate across the full spectrum of social media platforms, using the unique attributes of each to achieve their objectives. They have sufficient resources and talent to sustain their campaigns, and the most sophisticated and troublesome ones can create or acquire the most sophisticated technology. APMs can harness, aggregate, and nimbly parse user data and can recon new adaptive techniques to skirt account and content controls. Finally, they know and operate within terms of service, and they take advantage of free-speech provisions. Adaptive APM behavior will therefore continue to challenge the ability of social media platforms to thwart corrosive manipulation without harming their own business model (Fig. 2.8).

Fig. 2.9 Synthetic media scenarios and financial harm (Carnegie Endowment for International Peace JON BATEMAN 2020)

6.8 Scenarios for Financial Harm

Jon Bateman, a fellow in the Cyber Policy Initiative of the Technology and International Affairs Program at the Carnegie Endowment for International Peace, shows how threat assessments can be applied to specific industries or targets in his work on the threats of media manipulation to the financial system (Carnegie Endowment for International Peace JON BATEMAN 2020).

In the absence of hard data, a close analysis of potential scenarios can help to better gauge the problem. In this paper, ten scenarios illustrate how criminals and other bad actors could abuse synthetic media technology to inflict financial harm on a broad swath of targets. Based on today’s synthetic media technology and the realities of financial crime, the scenarios explore whether and how synthetic media could alter the threat landscape.

Fig. 2.10 There are now hundreds of civil society, government, and private sector groups addressing disinformation across the globe (Cogsec Collab 2021)

7 Investments in Countering False Media

Just as there is substantial effort at theorizing and implementing threat models to understand media manipulation, there is a growing effort in civil society, government, and the private sector to contend with disinformation. A large volume of work across multiple disciplines categorizes both the threats and key indicators of population susceptibility to disinformation, including new work on resilience, the dynamics of exploitative activities, and case studies of incidents and actors, including those who are not primarily motivated by malicious intent. This literature combines with a growing list of government, civil society, media, and industry interventions aimed at mitigating the harms of disinformation.

There are now hundreds of groups and organizations in the United States and around the world working on various approaches and problem sets to monitor and counter disinformation in democracies, see Fig. 2.10. This emerging infrastructure presents an opportunity to create connective tissue between these efforts, and to provide common tools and platforms that enable their work. Consider three examples:

  1. The Disinfo Defense League: A project of the Media Democracy Fund, the DDL is a consortium of more than 200 grassroots organizations across the U.S. that helps mobilize minority communities to counter disinformation campaigns (ABC News Fergal Gallagher 2021).

  2. The Beacon Project: Established by the nonprofit IRI, the Beacon Project combats state-sponsored influence operations, in particular ones from Russia. The project identifies and exposes disinformation and works with local partners to facilitate a response (Beacon Project 2021).

  3. CogSec Collab: The Cognitive Security Collaborative maps and brings together information security researchers, data scientists, and other subject-matter experts to create and improve resources for the protection and defense of the cognitive domain (Cogsec Collab 2021).

At the same time, there are significant efforts underway to contend with the implications of these developments in synthetic media and the threat vectors they create.

7.1 DARPA SemaFor

The Semantic Forensics (SemaFor) program under the Defense Advanced Research Projects Agency (DARPA) represents a substantial research and development effort to create forensic tools and techniques to develop semantic understanding of content and identify synthetic media across multiple modalities (DARPA 2021).

7.2 The Partnership on AI Steering Committee on Media Integrity Working Group

The Partnership on AI has hosted an industry-led effort to create similar detectors (Humprecht et al. 2020). Technology platforms and universities continue to develop new tools and techniques to determine provenance and identify synthetic media and other forms of digital media of questionable veracity. The Working Group brings together organizations spanning civil society, technology companies, media organizations, and academic institutions, and focuses on a specific set of activities and projects directed at strengthening the research landscape around new technical capabilities in media production and detection and at increasing coordination across the organizations implicated by these developments.

7.3 JPEG Committee

In October 2020, the JPEG committee released a draft document initiating a discussion to explore “if a JPEG standard can facilitate a secure and reliable annotation of media modifications, both in good faith and malicious usage scenarios” (Joint Bi-level Image Experts Group and Joint Photographic Experts Group 2021). The committee noted the need to consider revising standardization practices to address pressing challenges, including the use of AI techniques (“Deep Fakes”) and the use of authentic media out of context to spread misinformation and forgeries. The paper notes two areas of focus: (1) keeping a record of modifications to a file within its metadata and (2) ensuring that record is secure and accessible so that provenance is traceable.

7.4 Content Authenticity Initiative (CAI)

The CAI was founded in 2019 and is led by Adobe, Twitter, and The New York Times, with the goal of establishing standards that allow better evaluation of content. In August 2020, the forum published its white paper “Setting the Standard for Digital Content Attribution,” highlighting the importance of transparency around the provenance of content (Adobe, New York Times, Twitter, The Content Authenticity Initiative 2021). CAI will “provide a layer of robust, tamper-evident attribution and history data built upon XMP, Schema.org, and other metadata standards that goes far beyond common uses today.”

7.5 Media Review

Media Review is an initiative led by the Duke Reporters’ Lab to provide the fact-checking community with a schema for evaluating videos and images by tagging different types of manipulation (Schema.org 2021). Media Review is based on the ClaimReview project and is currently a pending schema.

8 Excerpts on Susceptibility and Resilience to Media Manipulation

Like other topics in forensics, media manipulation is directly tied to psychology. Though a thorough treatment of this topic is outside the scope of this chapter, we would like to expose the reader to some foundational work in these areas of research.

The following section is a collection of critical ideas from different areas of psychological mis/disinformation research. The purpose is to convey powerful ideas in the original authors’ own words. We hope that readers will select works of interest and investigate them further.

8.1 Susceptibility and Resilience

Below is a sample of recent studies, summarized and quoted, on various aspects of exploitation via disinformation. We categorize these papers by overarching topic, but note that many bear relevant findings across multiple topics.

Resilience to online disinformation: A framework for cross-national comparative research (Humprecht et al. 2020)

This paper seeks to develop a cross-country framework for analyzing disinformation effects and population susceptibility and resilience. Three country clusters are identified: media-supportive and politically consensus-driven (e.g., Canada, many Western European democracies); polarized (e.g., Italy, Spain, Greece); and low-trust, politicized, and fragmented (e.g., the United States). Countries in the first, media-supportive cluster are identified as most resilient to disinformation, while the United States is identified as most vulnerable owing to its “high levels of populist communication, polarization, and low levels of trust in the news media.”

Who falls for fake news? The roles of bullshit receptivity, overclaiming, familiarity, and analytic thinking (Pennycook and Rand 2020)

The tendency to ascribe profundity to randomly generated sentences – pseudo-profound bullshit receptivity – correlates positively with perceptions of fake news accuracy, and negatively with the ability to differentiate between fake and real news (media truth discernment). Relatedly, individuals who overclaim their level of knowledge also judge fake news to be more accurate. We also extend previous research indicating that analytic thinking correlates negatively with perceived accuracy by showing that this relationship is not moderated by the presence/absence of the headline’s source (which has no effect on accuracy), or by familiarity with the headlines (which correlates positively with perceived accuracy of fake and real news).

Fake news, fast and slow: Deliberation reduces belief in false (but not true) news headlines (Bago et al. 2020)

The Motivated System 2 Reasoning (MS2R) account posits that deliberation causes people to fall for fake news, because reasoning facilitates identity-protective cognition and is therefore used to rationalize content that is consistent with one’s political ideology. The classical account of reasoning instead posits that people ineffectively discern between true and false news headlines when they fail to deliberate (and instead rely on intuition)... Our data suggest that, in the context of fake news, deliberation facilitates accurate belief formation and not partisan bias.

Political psychology in the digital (mis) information age: A model of news belief and sharing (Van Bavel et al. 2020)

This paper reviews complex psychological risk factors associated with belief in misinformation such as partisan bias, polarization, political ideology, cognitive style, memory, morality, and emotion. The research analyzes the implications and risks of various solutions, such as fact-checking, “pre-bunking,” reflective thinking, de-platforming of bad actors, and media literacy.

Misinformation and morality: encountering fake news headlines makes them seem less unethical to publish and share (Effron and Raj 2020)

Experimental results and a pilot study indicate that “repeatedly encountering misinformation makes it seem less unethical to spread—regardless of whether one believes it. Seeing a fake-news headline one or four times reduced how unethical participants thought it was to publish and share that headline when they saw it again – even when it was clearly labelled false and participants disbelieved it, and even after statistically accounting for judgments of how likeable and popular it was. In turn, perceiving it as less unethical predicted stronger inclinations to express approval of it online. People were also more likely to actually share repeated (vs. new) headlines in an experimental setting. We speculate that repeating blatant misinformation may reduce the moral condemnation it receives by making it feel intuitively true, and we discuss other potential mechanisms.”

When is Disinformation (In) Credible? Experimental Findings on Message Characteristics and Individual Differences. (Schaewitz et al. 2020)

“...we conducted an experiment (N \(=\) 294) to investigate effects of message factors and individual differences on individuals’ credibility and accuracy perceptions of disinformation as well as on their likelihood of sharing them. Results suggest that message factors, such as the source, inconsistencies, subjectivity, sensationalism, or manipulated images, seem less important for users’ evaluations of disinformation articles than individual differences. While need for cognition seems most relevant for accuracy perceptions, the opinion toward the news topic seems most crucial for whether people believe in the news and share it online.”

Long-term effectiveness of inoculation against misinformation: Three longitudinal experiments. (Maertens et al. 2020)

“In 3 experiments (N \(=\) 151, N \(=\) 194, N \(=\) 170), participants played either Bad News (inoculation group) or Tetris (gamified control group) and rated the reliability of news headlines that either used a misinformation technique or not. We found that participants rate fake news as significantly less reliable after the intervention. In Experiment 1, we assessed participants at regular intervals to explore the longevity of this effect and found that the inoculation effect remains stable for at least 3 months. In Experiment 2, we sought to replicate these findings without regular testing and found significant decay over a 2-month time period so that the long-term inoculation effect was no longer significant. In Experiment 3, we replicated the inoculation effect and investigated whether long-term effects could be due to item-response memorization or the fake-to-real ratio of items presented, but found that this is not the case.”

8.2 Case Studies: Threats and Actors

Below is a collection of exposed current threats and actors utilizing manipulations of varying sophistication.

The QAnon Conspiracy Theory: A Security Threat in the Making? (Combating Terrorism Center Amarnath Amarasingam, Marc-André Argentino 2021)

Analysis of QAnon shows that, while disorganized, it is a significant public security threat due to historical violent events associated with the group, ease of access for recruitment, and a “crowdsourcing” effect wherein followers “take interpretation and action into their own hands.” The origins and development of the group are analyzed, including review of adjacent organizations such as Omega Kingdom Ministries. Followers’ behavior is analyzed and five criminal cases are reviewed.

Political Deepfake Videos Misinform the Public, But No More than Other Fake Media (Barari et al. 2021)

“We demonstrate that political misinformation in the form of videos synthesized by deep learning (“deepfakes”) can convince the American public of scandals that never occurred at alarming rates – nearly 50% of a representative sample – but no more so than equivalent misinformation conveyed through existing news formats like textual headlines or audio recordings. Similarly, we confirm that motivated reasoning about the deepfake target’s identity (e.g., partisanship or gender) plays a key role in facilitating persuasion, but, again, no more so than via existing news formats. ...Finally, a series of randomized interventions reveal that brief but specific informational treatments about deepfakes only sometimes attenuate deepfakes’ effects and in relatively small scale. Above all else, broad literacy in politics and digital technology most strongly increases discernment between deepfakes and authentic videos of political elites.”

Deepfakes and disinformation: exploring the impact of synthetic political video on deception, uncertainty, and trust in news (Vaccari and Chadwick 2020)

An experiment using deepfakes to measure its value as a deception tool and its impact on public trust: “While we do not find evidence that deceptive political deepfakes misled our participants, they left many of them uncertain about the truthfulness of their content. And, in turn, we show that uncertainty of this kind results in lower levels of trust in news on social media.”

Coordinating a Multi-Platform Disinformation Campaign: Internet Research Agency Activity on Three US Social Media Platforms, 2015–2017 (Lukito 2020)

An analysis of the use of multiple modes as a technique to develop and deploy information attacks: [Abstract] “The following study explores IRA activity on three social media platforms, Facebook, Twitter, and Reddit, to understand how activities on these sites were temporally coordinated. Using a VAR analysis with Granger Causality tests, results show that IRA Reddit activity granger caused IRA Twitter activity within a one-week lag. One explanation may be that the Internet Research Agency is trial ballooning on one platform (i.e., Reddit) to figure out which messages are optimal to distribute on other social media (i.e., Twitter).”

The disconcerting potential of online disinformation: Persuasive effects of astroturfing comments and three strategies for inoculation against them (Zerback et al. 2021)

This paper examines online astroturfing as part of Russian electoral disinformation efforts, finding that online astroturfing was able to change opinions and increase uncertainty even when the audience was “inoculated” against disinformation prior to exposure.

8.3 Dynamics of Exploitative Activities

Below are some excerpts from literature covering the dynamics and kinetics of IO activities and their relationships to disinformation.

Cultural Convergence: Insights into the behavior of misinformation networks on Twitter (McQuillan et al. 2020)

“We use network mapping to detect accounts creating content surrounding COVID-19, then Latent Dirichlet Allocation to extract topics, and bridging centrality to identify topical and non-topical bridges, before examining the distribution of each topic and bridge over time and applying Jensen-Shannon divergence of topic distributions to show communities that are converging in their topical narratives.” COVID-19 was the primary topic in the data, and the report includes a detailed analysis of “cultural bridging” activities among communities with conspiratorial views of the pandemic.
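
As a rough illustration of two of the steps named above, the sketch below extracts topics with Latent Dirichlet Allocation and compares two communities’ average topic distributions with the Jensen-Shannon divergence, using scikit-learn and SciPy. The corpus, community labels, and number of topics are illustrative assumptions, and the bridging-centrality step is omitted.

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from scipy.spatial.distance import jensenshannon

# Placeholder corpus: one document per account, with a community label for each
docs = ["example tweet text from an account in community A",
        "example tweet text from an account in community B"]
labels = ["A", "B"]

X = CountVectorizer(stop_words="english").fit_transform(docs)
doc_topics = LatentDirichletAllocation(n_components=10, random_state=0).fit_transform(X)

# Average topic distribution per community, then measure how far apart they are
topics_a = doc_topics[[i for i, c in enumerate(labels) if c == "A"]].mean(axis=0)
topics_b = doc_topics[[i for i, c in enumerate(labels) if c == "B"]].mean(axis=0)
print("JS divergence:", jensenshannon(topics_a, topics_b) ** 2)  # SciPy returns the JS distance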

An automated pipeline for the discovery of conspiracy and conspiracy theory narrative frameworks: Bridgegate, Pizzagate and storytelling on the web (Tangherlini et al. 2020)

“Predicating our work on narrative theory, we present an automated pipeline for the discovery and description of the generative narrative frameworks of conspiracy theories that circulate on social media, and actual conspiracies reported in the news media. We base this work on two separate comprehensive repositories of blog posts and news articles describing the well-known conspiracy theory Pizzagate from 2016, and the New Jersey political conspiracy Bridgegate from 2013. ...We show how the Pizzagate framework relies on the conspiracy theorists’ interpretation of “hidden knowledge” to link otherwise unlinked domains of human interaction, and hypothesize that this multi-domain focus is an important feature of conspiracy theories. We contrast this to the single domain focus of an actual conspiracy. While Pizzagate relies on the alignment of multiple domains, Bridgegate remains firmly rooted in the single domain of New Jersey politics. We hypothesize that the narrative framework of a conspiracy theory might stabilize quickly in contrast to the narrative framework of an actual conspiracy, which might develop more slowly as revelations come to light.”
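
The excerpt describes the pipeline only at a high level. As a rough approximation of the idea, the sketch below treats a narrative framework as a network of named entities (people, organizations, places) that co-occur within sentences, using spaCy and NetworkX; the entity types, co-occurrence heuristic, and placeholder corpus are assumptions for illustration, and the paper’s own pipeline relies on narrative theory and richer relationship extraction. Dense linking across otherwise unrelated domains is the signature the authors associate with conspiracy theories.

import itertools
import networkx as nx
import spacy  # assumes the en_core_web_sm model has been downloaded

nlp = spacy.load("en_core_web_sm")
G = nx.Graph()

posts = ["placeholder text: blog posts or news articles would go here"]
for doc in nlp.pipe(posts):
    for sent in doc.sents:
        actants = {e.text for e in sent.ents if e.label_ in {"PERSON", "ORG", "GPE"}}
        for a, b in itertools.combinations(sorted(actants), 2):
            if G.has_edge(a, b):
                G[a][b]["weight"] += 1
            else:
                G.add_edge(a, b, weight=1)

print(G.number_of_nodes(), G.number_of_edges(), nx.number_connected_components(G))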

Fake news early detection: A theory-driven model (Zhou et al. 2020)

“The method investigates news content at various levels: lexicon-level, syntax-level, semantic-level, and discourse-level. We represent news at each level, relying on well-established theories in social and forensic psychology. Fake news detection is then conducted within a supervised machine learning framework. As an interdisciplinary research, our work explores potential fake news patterns, enhances the interpretability in fake news feature engineering, and studies the relationships among fake news, deception/disinformation, and clickbaits. Experiments conducted on two real-world datasets indicate the proposed method can outperform the state-of-the-art and enable fake news early detection when there is limited content information.”

“...Among content-based models, we observe that (3) the proposed model performs comparatively well in predicting fake news with limited prior knowledge. We also observe that (4) similar to deception, fake news differs in content style, quality, and sentiment from the truth, while it carries similar levels of cognitive and perceptual information compared to the truth. (5) Similar to clickbaits, fake news headlines present higher sensationalism and lower newsworthiness while their readability characteristics are complex and difficult to be directly concluded. In addition, fake news (6) is often matched with shorter words and longer sentences.”
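
A minimal sketch of the general setup these excerpts describe, hand-crafted content and style features feeding a supervised classifier, appears below. The three features, placeholder texts, and labels are illustrative assumptions standing in for the paper’s full lexicon-, syntax-, semantic-, and discourse-level representation.

import numpy as np
from sklearn.linear_model import LogisticRegression

def style_features(text):
    # Crude cues echoing the excerpt: word length, sentence length,
    # and exclamation density as a rough sensationalism proxy.
    words = text.split()
    sentences = [s for s in text.split(".") if s.strip()]
    avg_word_len = np.mean([len(w) for w in words]) if words else 0.0
    avg_sent_len = len(words) / max(len(sentences), 1)
    exclaim_rate = text.count("!") / max(len(words), 1)
    return [avg_word_len, avg_sent_len, exclaim_rate]

texts = ["BREAKING!!! You will not believe what they are hiding from you",
         "The committee released its annual report on Tuesday."]
labels = [1, 0]  # 1 = fake, 0 = real (placeholder labels)

X = np.array([style_features(t) for t in texts])
clf = LogisticRegression().fit(X, labels)
print(clf.predict_proba(X))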

A survey of fake news: Fundamental theories, detection methods, and opportunities (Zhou and Zafarani 2020)

“By involving dissemination information (i.e., social context) in fake news detection, propagation-based methods are more robust against writing style manipulation by malicious entities. However, propagation-based fake news detection is inefficient for fake news early detection ... as it is difficult for propagation-based models to detect fake news before it has been disseminated, or to perform well when limited news dissemination information is available. Furthermore, mining news propagation and news writing style allow one to assess news intention. As discussed, the intuition is that (1) news created with a malicious intent, that is, to mislead and deceive the public, aims to be “more persuasive” compared to those not having such aims, and (2) malicious users often play a part in the propagation of fake news to enhance its social influence.” This paper also emphasizes the need for explainable detection as a matter of public interest.

A Signal Detection Approach to Understanding the Identification of Fake News (Batailler et al. 2020)

“The current article discusses the value of Signal Detection Theory (SDT) in disentangling two distinct aspects in the identification of fake news: (1) ability to accurately distinguish between real news and fake news and (2) response biases to judge news as real versus fake regardless of news veracity. The value of SDT for understanding the determinants of fake news beliefs is illustrated with reanalyses of existing data sets, providing more nuanced insights into how partisan bias, cognitive reflection, and prior exposure influence the identification of fake news.”
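
Concretely, the two quantities SDT separates can be computed from a hit rate (correctly judging fake news as fake) and a false-alarm rate (judging real news as fake). The sketch below uses illustrative rates; d' captures discrimination ability, while c captures the bias toward calling news fake regardless of its veracity.

from scipy.stats import norm

hit_rate = 0.80          # P(judged fake | news is fake)  -- illustrative value
false_alarm_rate = 0.35  # P(judged fake | news is real)  -- illustrative value

d_prime = norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)             # sensitivity
criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(false_alarm_rate))  # response bias
print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")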

Sharing of fake news on social media: Application of the honeycomb framework and the third-person effect hypothesis (Talwar et al. 2020)

This publication uses the “honeycomb framework” of Kietzmann, Hermkens, McCarthy, and Silvestre (Kietzmann et al. 2011) to explain sharing behaviors related to fake news. Qualitative evaluation supports the explanatory value of the framework, and the authors identify a number of quantitative associations between demographics and sharing characteristics.

Modeling echo chambers and polarization dynamics in social networks (Baumann et al. 2020)

“We propose a model that introduces the dynamics of radicalization, as a reinforcing mechanism driving the evolution to extreme opinions from moderate initial conditions. Inspired by empirical findings on social interaction dynamics, we consider agents characterized by heterogeneous activities and homophily. We show that the transition between a global consensus and emerging radicalized states is mostly governed by social influence and by the controversialness of the topic discussed. Compared with empirical data of polarized debates on Twitter, the model qualitatively reproduces the observed relation between users’ engagement and opinions, as well as opinion segregation in the interaction network. Our findings shed light on the mechanisms that may lie at the core of the emergence of echo chambers and polarization in social media.”
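
A minimal sketch of this class of opinion-dynamics model appears below: each agent’s opinion relaxes toward neutrality while being pushed by a nonlinear (tanh) social-influence term, so sufficiently strong coupling drives moderate initial opinions toward extremes. For brevity this version uses a fixed random interaction network instead of the paper’s activity-driven, homophilic temporal network, and every parameter value is an illustrative assumption.

import numpy as np

rng = np.random.default_rng(0)
N, K, alpha = 200, 3.0, 2.0        # agents, social-influence strength, controversialness
dt, steps = 0.01, 5000

A = (rng.random((N, N)) < 0.05).astype(float)  # sparse, fixed interaction network
np.fill_diagonal(A, 0)
x = rng.uniform(-1, 1, N)                      # moderate initial opinions

for _ in range(steps):
    x += dt * (-x + K * A @ np.tanh(alpha * x))

print("initial opinions in [-1, 1]; final spread:", x.min(), x.max())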

(Mis)representing Ideology on Twitter: How Social Influence Shapes Online Political Expression (Guess 2021)

Comparing users’ tweets, and those of the accounts they follow, against self-reported political affiliations from surveys, the authors find a disconnect between self-reported political slant and the views expressed in tweets. This suggests that public political expression may be distinct from individual affiliations, and that an individual’s tweets may not be an accurate proxy for their beliefs. The authors call for a re-examination of the potential social causes of, and influences on, public political expression: [from Abstract] “we find evidence consistent with the hypothesis that users’ public expression is powerfully shaped by their followers, independent of the political ideology they report identifying with in attitudinal surveys. Finally, we find that users’ ideological expression is more extreme when they perceive their Twitter networks to be relatively like-minded.”

8.4 Meta-Review

A recent systematic review of the literature on disinformation is summarized below.

A systematic literature review on disinformation: Toward a unified taxonomical framework (Kapantai et al. 2021)

“Our online information landscape is characterized by a variety of different types of false information. There is no commonly agreed typology framework, specific categorization criteria, and explicit definitions as a basis to assist the further investigation of the area. Our work is focused on filling this need. Our contribution is twofold. First, we collect the various implicit and explicit disinformation typologies proposed by scholars. We consolidate the findings following certain design principles to articulate an all-inclusive disinformation typology. Second, we propose three independent dimensions with controlled values per dimension as categorization criteria for all types of disinformation. The taxonomy can promote and support further multidisciplinary research to analyze the special characteristics of the identified disinformation types.”

9 Conclusion

While media forensics as a field is populated largely by engineers and computer scientists, it intersects with a variety of other disciplines, and it is crucial to people’s ability to form consensus and collaboratively solve problems, from climate change to global health.

John Locke, an English philosopher and influential thinker on empiricism and the Enlightenment, once wrote that “the improvement of understanding is for two ends: first, our own increase of knowledge; secondly, to enable us to deliver that knowledge to others.” Indeed, your study of media forensics takes Locke’s formulation a step further. The effort you put into understanding this field will not only increase your own knowledge; it will also allow you to help build an information ecosystem that delivers that knowledge to others—the knowledge necessary to fuel human discourse.

The work is urgent.