
Dupery by Design: The Epistemology of Deceit in a Postdigital Era

In medieval Europe, most people were formally illiterate. Scribes composed letters with ink and quills, and merchants, pilgrims or messengers undertook slow, arduous and expensive journeys over great distances to deliver them. Letter delivery required fresh horses, good weather for roads and seas to remain passable, and some luck in not coming up against bandits, interceptors, illness or injury. After many weeks or months, when the letter reached its destination, it was likely read aloud to its recipient by someone who could read. Oral messages, the most commonly used method of communication, demanded good listening skills and a good memory to relay the information accurately. As with written messages, the sender had to rely on the messenger to exercise discretion and integrity: not to divulge the information to anyone but the recipient, and not to distort the message or lie about its content. Medieval communication meant that information had to be entrusted to trustworthy messengers.

Contrast this with the present day: we have near-universal literacy and depend on digital literacy for instant communication. The advancement of digital technology and online environments has increased not only the speed at which we can create information, but also the speed at which we can spread it. Keyboards and computers, for example, are technological tools that allow us to write digitally and expeditiously; software is the means for creating pictures, photos, memes and many other digital messages. Messages that might have taken months to deliver in medieval times can now travel across the globe in seconds. Anyone with network connectivity can access information: we can ‘just google it’ if we have a query about the geological composition of the Moon or the World Health Organization’s advice on containing epidemics. Our information networks require little skill, time or money to send or receive information, and they have accelerated the ease and efficiency with which we access a vast array of information. Digital technologies have not only engendered reliance on technology for accessing (un)reliable and trusted information; they have also extended well beyond the boundaries of online environments and digital devices to connect humanity.

With the Good of Digital Technology Comes the Harm

Digital technology does not, however, operate entirely independently and is not, therefore, free from human influence (Bhatt and MacKenzie 2019). As we well know, the same technologies that connect us epistemologically and disseminate information so quickly and efficiently are just as ferociously efficient at creating and spreading misinformation, disinformation and malinformation [1]. Creators and senders of information online are enabled by the affordances of digital technologies to design fraudulent, false and fake news and information with the end-goal of deceiving online recipients about a particular subject and manipulating them to act in desired ways. Augmented by algorithms that amplify content based on our latent and inherent biases, information disorders, whether spread by human error or by design, quickly travel far and wide, with little opportunity for anyone to redress an online error. An example is that of the journalist Natasha Fatah, whose error in reporting the terrorist attack in Toronto in April 2018 proved to be an unwitting kind of experiment: her incorrect tweet describing the attacker as ‘wide eyed, angry and Middle Eastern’ received far more engagement than her later, corrected tweet describing him as ‘white’ (Meserole 2018) [2]. This illustrates how, on Twitter, users’ attention can be directed towards specific content without checks on credibility or veracity.
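The dynamic behind the Fatah example can be illustrated with a toy sketch (the posts and engagement figures below are invented for illustration and do not represent any platform’s actual algorithm): a feed ranked purely on engagement surfaces the sensational but erroneous post above its quieter correction, because nothing in the scoring rewards accuracy.

```python
# Toy illustration: an engagement-only feed ranking, with no check
# on credibility or veracity.

def rank_feed(posts):
    """Order posts by raw engagement (retweets + likes), ignoring accuracy."""
    return sorted(posts, key=lambda p: p["retweets"] + p["likes"], reverse=True)

feed = [
    {"text": "Attacker was 'wide-eyed, angry and Middle Eastern'",  # erroneous
     "retweets": 800, "likes": 1500, "accurate": False},
    {"text": "Correction: attacker was white",                      # correct
     "retweets": 50, "likes": 90, "accurate": True},
]

ranked = rank_feed(feed)
# The false but more-engaging post is ranked first.
print(ranked[0]["accurate"])  # prints False
```

The point of the sketch is that the ranking function has no input for truth: a correction, which typically attracts less engagement, is structurally disadvantaged.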

Notably, capitalist and political agendas lie behind the escalation of deceit online, though this should be no surprise: the manipulation of desire and belief, and the pursuit of the right image for public consumption, are hardly new. We know that institutions and their representatives will go to extravagant lengths to conceal or distort the truth in order to obtain desired ends or maintain secrecy: lies are justified tools, whether on- or offline. We live in an era of what is often casually called ‘post-truth’, the notion that there is no truth: any claim or so-called fact can be deemed ‘truthful’ provided we believe it to be true and it accords with our sentiments. Feelings trump reasons, can be manipulated in ways that reasons cannot, and so render reasons unnecessary.

Additionally, in conducting social, economic and political business across the digital world, people are often rewarded for dishonesty, insincerity, fraud, and, ultimately, deception (Pettit 2013). This pervasive chicanery has spread through online mass markets intent on engendering false beliefs and hopes in consumers. We seem to be in an environment which can seduce people into having or maintaining false beliefs with such swift stealth that the power to deceive goes unchecked.

The Threat to Truth and Trust

In medieval times, we had to rely on and trust the messenger. Present-day digital technologies mean that openness and transparency are possible on a scale that was not possible in previous eras: we can call almost anyone to account, quickly discredit false claims, fact check, and expose lies, deceit, bullshit and fake news worldwide in seconds. Yet despite this power, we acquire false beliefs, largely because we work under circumstances, within online filter bubbles, and with habits of mind, that allow us neither the time nor the inclination to go searching for relevant facts or correct information among the oceans of information the Internet offers us. Epistemologically, our biases incline us to react to information that confirms those biases (Bhatt and MacKenzie 2019).

Digital technologies are animated by economic and political motivations that ought to make us question whom we can rely on as a trusted source of information. Today, when social media platforms have been weaponised (see Brooking and Singer 2016), how can we tell which claims and counterclaims, reports and facts, evidence and apparent evidence are trustworthy when we can become engulfed in superabundant and contradictory information? Whom do we trust when it can be so hard to distinguish fact from fiction, rumour from report, a reliable source from a malignant muckraker, a truth-teller from a deceiver? Is there something within the design and infrastructure of digital platforms, such as deep learning algorithms, that is making this problem far worse in our current times? These are the dilemmas we confront when openness and transparency diminish rather than nourish our confidence in trust and truth. As Onora O’Neill reminds us in her 2002 BBC Reith Lectures on trust [3]:

deception is the enemy of trust; deceivers do not treat others as moral equals … so-called information ‘products’ can be transmitted, reformatted and adjusted, embroidered and elaborated, shaped and spun, repeated and respun; it can be quite hard to assess truth or falsehood.

This is undoubtedly a dismal state of affairs. Confronted with a plethora of contradictory information, the continual revelations of lies, chicanery and deceit, online users (the public more generally) may not only come to distrust most sources of information, they may also resile from bothering with the truth. And when truth disappears from public life, democratic stability is threatened. However, it is important to be reminded that, regardless of the deceiver’s elaborations, embroidery or spinning, a fact can be safely removed from the (digital) world ‘only if enough people believe in its nonexistence’ (Arendt 1971). It can be done, Arendt observed, ‘only through radical destruction’ and there has ‘never existed on any level of government such a will to wholesale destruction, lies destined for public consumption’, not even under the Nazis or Soviet Communism. Deceit, and its family members of lies, bullshit and fakery, is efficient only to the extent that the deceiver knows the truth and can hide it from view (MacKenzie and Bhatt 2020b). Nevertheless, this does not obviate the point that discerning the truth is difficult and that we operate in defactualised, post-truth digital environments. In her essay ‘Lying in Politics’ (1971), Arendt defined defactualisation as the inability to discern fact from fantasy, and deliberate falsehoods as those which are concerned with contingent facts that carry no inherent truth: a fact can be true or not as and when expediency requires. Factual truths can be torn apart by lies, by the organised lying of organisations, covered up by calculated falsehoods, and/or allowed to disappear into near oblivion by mechanisms such as algorithms, which respond more readily to content that attracts retweets and mentions than to content that does not.

The Harms of Information Disorders Should Not Silence Free Speech

Concern about information disorders and the harm to trust and truth has raised questions about how to control or limit free speech, and has forced us to better understand how digital literacy emerges in the lives of everyday users of technologies (Bhatt and MacKenzie 2019) [4]. But while online free speech, a free press, and near-unfettered access to open and transparent information are fraught with the dangers of deceit, information disorders and (mis)information overload, silencing these channels is not the answer. John Stuart Mill is often invoked by those claiming that free speech or opinion is an unconditional good that should in no way be curtailed. Mill was very sure that all silencing of an opinion was an ‘assumption of infallibility’:

the peculiar evil of silencing the expression of an opinion is, that it is robbing the human race; posterity as well as the existing generation; those who dissent from the opinion, still more than those who hold it. If the opinion is right, they are deprived of the opportunity of exchanging error for truth: if wrong, they lose, what is almost as great a benefit, the clearer perception and livelier impression of truth, produced by its collision with error. (Mill 2006: 32)

The ‘dissentient worlds’ of other people should be seen as a standing invitation to be ‘corrected before the whole world’. If we are to cultivate our understanding, then we need to learn the grounds of our opinions and to be able to defend them against common objections. The only rational means by which we can justify our beliefs and assure ourselves that we are right is by having the freedom to have them contradicted or disproven by others who hold contrary views or experiences. Errors, Mill argued, ‘are corrigible’ (27): that is, mistakes can be rectified by discussion and experience, ‘despite there being few who may be capable of judging truth’. The habit of subjecting one’s opinion to scrutiny is the only stable foundation for relying on the truth of it, and only then has a person a right to think her ‘judgment better than that of any person, or any multitude, who have not gone through a similar process’ (27). The further value in subjecting one’s views to fearless scrutiny and critique is that they would be held as ‘a living truth’ rather than ‘dead dogma’ or yet another superstition (42):

assuming that the true opinion abides in the mind, but abides as a prejudice, a belief independent of, and proof against, argument—this is not the way in which truth ought to be held by a rational being. This is not knowing the truth. Truth, thus held, is but one superstition the more, accidentally clinging to the words which enunciate a truth. (42–43)

Mill’s confidence may be misplaced or naïve given the force and reach of modern digital technology. Nevertheless, sustained and principled exposure to diversity of opinion offers us our best chance of obtaining truth (an exposure that is not, perhaps, suited to online environments). It is always possible that a dissenter from established opinion could have something valuable to say and so obstructing free speech would be a loss to truth.

Arendt (1971) asserted that facts need testimony to be remembered and trustworthy witnesses to be established in order to be secure. Factual statements, however true or credible, however often they are tested against public opinion, can rarely be beyond doubt, and certainly not secure against attack. The deceiver knows that reality can be questioned and that a version of the truth may be more appealing than the reality itself, appealing to sentiment rather than to reason (MacKenzie and Bhatt 2020a). One would think that the easy availability of information should work to contradict, and so undermine, the ‘truth’ of false claims, but we have a tendency to be passive before overwhelming information or when informants we trust, respect or admire speak; we often grant too much credibility to the speaker or the source.

All freedom of opinion becomes a ‘cruel hoax’ (Arendt 1971) if access to unmanipulated factual information is impeded. Digital technologies, and social media platforms in particular, both create new norms for language and discourse and enable new forms of power and inequality, casting doubt on technologically deterministic accounts of technology’s relationship with society. To what extent is the design and infrastructure of digital platforms an enabler of these current problems? If dupery is not always ‘by design’, in that humans can and often do mistakenly spread misinformation (see Meserole 2018), then is the very design of a social media platform to be implicated in the problems we face? Information disorders are highly complex phenomena that algorithmic tweaks are unlikely to solve. Legislation that fines platforms for hosting unlawful content could work, along with fact-checking by the platforms or third parties, quality-controlled trending topics and news feeds, high-quality engineering resources, and strong editors. These are some basic, if complex, solutions, some technological and some not. Further technological measures include credibility scoring, whitelisting and blacklisting.
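Credibility scoring combined with whitelisting and blacklisting, mentioned above, might work roughly as follows. This is a minimal sketch only: the domains, scores and threshold below are invented for illustration, not drawn from any real platform.

```python
# Hypothetical credibility gate: hard white/blacklists take precedence,
# then a numeric credibility score is compared against a threshold.

WHITELIST = {"who.int"}                      # always surfaced
BLACKLIST = {"totally-real-news.example"}    # always suppressed
CREDIBILITY = {"who.int": 0.95, "randomblog.example": 0.4}

def allow_source(domain, threshold=0.6):
    """Return True if content from this source should be surfaced."""
    if domain in BLACKLIST:
        return False
    if domain in WHITELIST:
        return True
    # Unknown sources default to a neutral score below the threshold,
    # so they are held back until assessed.
    return CREDIBILITY.get(domain, 0.5) >= threshold
```

Even this trivial gate makes the underlying difficulty visible: someone must decide the lists, the scores and the threshold, which is exactly the kind of value-driven choice the following paragraphs attribute to engineers and platform owners.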

However, as Martin Moore, Director of the Centre for the Study of Media, Communication and Power, argued in his submission of evidence to the UK Parliamentary Select Committee inquiry on fake news, ‘the long history of fake news, the political, social and economic motivations for producing it … mean that technology will only ever partly address the problem’ (2017: 11). Engineers and platform owners make value-driven choices to determine which news to promote and which to suppress, even supposing they can solve, or merely alleviate, the problem:

The technology platforms on which this news travels are reliant on advertising that prioritises popular and engaging content that is shared widely. The content is not distinguished by its trustworthiness, authority or public interest, since these are not criteria that drive likes and shares. (Moore 2017: 11–12)

News and digital literacy programmes, critical research skills, and education on the power of images to skew opinion and feeling are essential. The UK House of Commons Digital, Culture, Media and Sport Committee has stated that digital literacy should be the ‘fourth pillar’ of education, along with reading, writing and maths (2019: 350). The primary features of such an education could include: (i) social media verification skills; (ii) an understanding of how algorithms prioritise information and sites, along with curation functions; (iii) techniques for developing emotional scepticism, to override our tendency to be less critical of content that provokes an emotional response; and (iv) statistical numeracy.


The Internet operates at a scale and speed unprecedented in history. It brings many freedoms to people across the world, provides access to incredibly diverse voices, materials, ideas and resources, and has given us phenomenal power to communicate. It also brings many harms, including the power to ‘hijack our minds and society’ (Harris cited in House of Commons Report 2019: 6). Education on this remarkable resource is necessary to ensure that our minds and society are free from the baneful effects of information disorders, thereby safeguarding trust, truth and integrity—and our democracies.

In the spirit of understanding how deceit infiltrates our belief systems, we suggest that if we can understand the workings of digital technologies and how they are used in deception, then we can increase our epistemic volition, reduce our susceptibility to deceit and increase our focus on truth.


  1. The Council of Europe (Wardle and Derakhshan 2017: 5) refers to these as ‘information disorders’. Misinformation occurs when false information is shared but no harm is intended; disinformation is when false information is knowingly shared and is intended to cause harm; and malinformation is when real information is deliberately shared with the intention of causing harm.

  2. The graph showing how much more engagement the erroneous tweet received than the correct one can be seen in Meserole (2018). Accessed 13 March 2020.

  3. The talks are available on BBC Radio 4 (accessed 17 March 2020). The Reith Lectures are given annually by a distinguished thinker and broadcast on BBC Radio 4; they were inaugurated in 1948 to commemorate Lord Reith, the first Director-General of the BBC, for his services to public broadcasting. Lecturers have included Edward Said (1993), Anthony Giddens (1999), Michael Sandel (2009), Aung San Suu Kyi (2011) and Stephen Hawking (2016).

  4. See the 2020 Special Issue of Postdigital Science and Education, PDSE 2(1), which explores these issues.


Correspondence to Alison MacKenzie.

MacKenzie, A., Rose, J. & Bhatt, I. Dupery by Design: The Epistemology of Deceit in a Postdigital Era. Postdigit Sci Educ 3, 693–699 (2021).