1 Introduction

Over the past two decades, authoritarianism has been steadily on the rise while liberal democracy has been in global decline (Repucci & Slipowitz, 2022). During the same period, there has been a rapid development and widespread proliferation of digital technologies. Multiple experts suggest that these seemingly independent trends are causally connected (e.g., Deibert, 2015; Dragu & Lupu, 2021; Lamensch, 2021; Weiss, 2020). They argue that the swift expansion of digital technology is facilitating the growth of authoritarianism, a perspective that sharply contrasts with the once common belief that technologies such as the internet would enhance free speech and democracy.Footnote 1 These experts typically refer to this dynamic as digital authoritarianism (DA) or, alternatively, as digital repression.Footnote 2

Commentators broadly describe DA as “the use of digital information technology by authoritarian regimes to surveil, repress, and manipulate domestic and foreign populations” (Polyakova & Meserole, 2019, 1; for similar definitions see Section 2 of this paper). The way the Chinese Communist Party has leveraged facial recognition software to surveil China’s Uighur population would on this definition be a clear-cut case of DA (as documented by Shahbaz, 2018; Lamensch, 2021; Polyakova & Meserole, 2019). The problem with this definition – let’s term it the intention-based definition – is that it presupposes that DA involves politically repressive agents intentionally exploiting digital technologies in their pursuit of authoritarian ends. This is problematic because we find these same experts discussing, under the umbrella of DA, ways that digital technologies systematically foster authoritarianism without any politically repressive agents intentionally causing these effects. For instance, Steven Feldstein recounts how in Norway, during the COVID-19 pandemic, democratic leaders implemented an invasive contact-tracing app, one that constituted a quasi-authoritarian system of surveillance (Feldstein, 2021, 278). Yet all the available evidence suggests that the guiding intention of the Norwegian government was to ensure public health. Indeed, the government’s benign intentions seem to be largely confirmed by the fact that it swiftly deactivated the app following the release of an Amnesty International report that identified serious privacy issues with its contact-tracing technology.

This paper explores the tension between the intention-based definition and current usage of the term DA. I argue that the intention-based definition is, as it stands, untenable and requires revision. Far from being an empty academic exercise, redefining DA promises to yield real practical advantages. Perhaps most obviously, those seeking to combat DA must have clarity about the precise nature of what they stand opposed to. Moreover, because the intention-based definition is excessively narrow, it can lead commentators to overlook potentially severe forms of DA.

In Section 2, I commence by surveying existing definitions of DA, underscoring their consistent requirement that a politically repressive agent be intentionally misusing digital technologies for authoritarian purposes. In essence, these definitions all turn out to be iterations of the intention-based definition. In Section 3, I present four counterexamples to this definition: benign surveillance; digital sovereignty; attention-harvesting algorithms; and tech-induced loneliness. In each case, we see authoritarianism being promoted by digital technologies without any evidence of this being intentionally caused by politically repressive agents. Notably, the first three of these counterexamples are drawn from expert discussions of DA. The fourth counterexample – tech-induced loneliness – is structurally analogous to the first three, and therefore qualifies as DA, though it is not identified as such by experts. Based on these observations, I contend that the intention-based definition is underinclusive and is therefore unsustainable. Section 4 then outlines an improved definition of DA – what I call the promotion-based definition. Since this more expansive definition does not posit intentional, politically repressive agency as a precondition of DA, it can accommodate the counterexamples discussed in Section 3. Further, it enables us to catch a broader spectrum of cases of DA, such as tech-induced loneliness, which those adhering to the intention-based definition are prone to overlook. After outlining some further practical benefits of the promotion-based definition, I argue that we still need to distinguish between intentional and unintentional forms of DA since they call for distinct types of remedial action.

2 The Intention-Based Definition

Let us begin by examining some existing definitions of DA. As mentioned above, Alina Polyakova and Chris Meserole (2019, 1) describe DA as “the use of digital information technology by authoritarian regimes to surveil, repress, and manipulate domestic and foreign populations.” This definition has been influential and is cited by multiple other commentators (see, e.g., Jones, 2022, 2; Dictionary of Populism, n.d.).Footnote 3 On this definition, DA occurs when authoritarian regimes intentionally exploit digital technologies to repressively control their citizens. One issue this immediately raises is that authoritarian practices are often present in hybrid or even democratic regimes, which cannot be unequivocally labeled authoritarian (Glasius, 2018). Erol Yayboke and Samuel Brannen (2020, 2) appear to take this into account when they define DA “as the use of the internet and related digital technologies by leaders with authoritarian tendencies to decrease trust in public institutions, increase social and political control, and/or undermine civil liberties.” Likewise, Steven Feldstein (2021, 25) advances a definition of digital repression (synonymous with DA) that acknowledges that the practice can occur in regimes that are not strictly classified as authoritarian: “I define digital repression as the use of information and communications technology to surveil, coerce, or manipulate individuals or groups in order to deter specific activities or beliefs that challenge the state” (original emphasis).Footnote 4 For Feldstein, as for Yayboke and Brannen, then, DA can coherently occur in democracies. This is a significant improvement on the definition offered by Polyakova and Meserole, because some of the most vigorous practitioners of DA, such as Putin’s government in Russia (Lamensch, 2021), are regimes that cannot straightforwardly be categorized as authoritarian. But whether or not these authors confine DA to authoritarian regimes, they all similarly describe it as the intentional use (or rather abuse) of digital technologies by politically repressive agents, which is why I refer to this characterization of DA as the intention-based definition.

Before we evaluate this definition, however, we first need to clarify the concept of authoritarianism, since it carries diverse connotations. Some (e.g., Svolik, 2012, 22–23) classify an authoritarian regime as one that simply fails to meet at least one of the following two criteria of democracy: (a) having “free and competitive elections” and (b) having “an executive that is elected either directly in free and competitive presidential elections or indirectly by a legislature in parliamentary systems.” On this basis, Milan Svolik (2012, 22–23) considers the terms authoritarian regime and dictatorship to be interchangeable. However, I will adopt a more expansive understanding of the term authoritarian, according to which it denotes any practice or web of practices that fosters the conditions of dictatorship just mentioned. This aligns with the Oxford English Dictionary (2023) definition of the adjective authoritarian as anything or anyone “[f]avourable to or characterized by obedience to authority as opposed to personal liberty; strict, dictatorial.” Consequently, a practice might qualify as authoritarian if it undermines freedom of speech, suppresses political pluralism and civil liberties, obstructs accountability, misinforms citizens, or threatens them in a manner that compromises their capacity to vote, and so forth. Even within a democracy, a practice that significantly undermines any of these pillars of democratic liberty can be considered favorable to dictatorship and, to that extent, authoritarian. Political agents, such as individual leaders or regimes, would then qualify as authoritarian if they systematically engage in such practices.

3 Counterexamples to the Intention-Based Definition

But does the intention-based definition withstand scrutiny? In this section, I contend that it does not, and to demonstrate this, I present four cases of DA that the intention-based definition is unable to accommodate. The first three of these are discussed by multiple experts under the heading of DA, even though they diverge from the intention-based definition explicitly advocated by some of these very same experts. The fourth counterexample is isomorphic with the other three, and I argue that it can therefore be categorized as a case of DA, even though it again diverges from the intention-based definition.

3.1 Benign Surveillance and the Chilling Effect

The most compelling counterexample is that of benign surveillance, particularly instances where governments implement digital systems to monitor citizens with the aim of ensuring public health. Despite these noble intentions, however, such surveillance systems can end up restricting citizens’ de facto liberties in a manner that qualifies as authoritarian. In an article scrutinizing how the pandemic has fueled the spread of DA, Lydia Khalil (2020, 28) sheds light on how “[m]any democracies have accepted new infringements on privacy, bypassing the usual legislative processes of scrutiny and consideration in the interests of pandemic mitigation.” In a similar vein, Steven Feldstein discusses how during the COVID-19 pandemic, the governments of Norway, Bahrain and Kuwait rolled out contact-tracing apps that violated citizens’ privacy. At the time, Claudio Guarnieri, Head of Amnesty International’s Security Lab, claimed that in deploying these apps, these governments ran “roughshod over people’s privacy, with highly invasive surveillance tools which go far beyond what is justified in efforts to tackle COVID-19” (Amnesty International, 2020; see also Sadowski, 2020). Feldstein (2021, 278) presents this as an example of the way in which “governments are implementing new surveillance techniques in a rushed and ad hoc manner.” He further notes that soon after the publication of Amnesty’s report, Norway withdrew the offending app. However, the context of this example in Feldstein’s latest book, The Rise of Digital Repression, strongly indicates that he considers the Norway case an example of DA, even though it would appear to starkly contradict the intention-based definition. After all, the government of Norway ostensibly introduced its contact-tracing app with the aim of safeguarding public health. If we take the Norwegian government at its word, there was no politically repressive intention behind these policies.

One could argue that such trust is simply naïve. This view seems to be justified in the case of Bahrain, which is commonly recognized as an authoritarian regime.Footnote 5 But even if we are right to suspect the Bahraini government of pursuing repressive ulterior motives with its contact-tracing app, we nonetheless cannot be certain that the regime’s intention was repressive as opposed to protective. And yet ideally we still want to be able to designate this case as an instance of DA, that is, without having to conclusively establish that the regime intentionally designed its contact-tracing technology to repress Bahraini citizens.

In contrast to Bahrain, we have significantly less reason to question the sincerity of the Norwegian government, given its robust democratic credentials.Footnote 6 This is further supported by the fact that the Norwegian government deactivated the app following the publication of Amnesty’s report. But why might we want to consider such examples – that is, where the operative intentions are murky (as in the case of Bahrain), or convincingly benign (as in the case of Norway) – as instances of DA?

Firstly, we can argue that such surveillance systems promote authoritarianism by laying its foundations. As Rob Kitchin (2020, 371) warns, “[t]he fine-grained mass tracking of movement, proximity to others, and knowledge of some form of status (beyond health, for example) will enable tighter forms of control.” These surveillance systems provide the infrastructure necessary for effective authoritarianism. Constructing such infrastructure leaves people vulnerable to future leaders who might have authoritarian aspirations. In this scenario, the relevant danger still depends on the politically repressive intentions of potential authoritarian agents downstream. However, it is evident that commentators like Feldstein want to categorize and criticize these systems under the rubric of DA, irrespective of whether such hypothetical agents actually materialize. Thus, such surveillance – even when implemented by benign politicians – can be considered DA because it renders people susceptible to potential authoritarian agents, thereby inadvertently fostering authoritarianism.

But we might also consider such surveillance systems as cases of DA because, even when motivated by benevolently protective concerns, they nonetheless generate politically repressive effects. As Kitchin (2020, 371) observes, aside from enabling authoritarianism, excessive surveillance is also “likely to have a chilling effect on protest and democracy.” The rationale for this claim runs as follows: when people are subjected to surveillance, they self-discipline and self-censor out of fear of being punished. The worry is that an existing, or a forthcoming, authoritarian political agent could access the data obtained by means of such surveillance and punish citizens for behavior the agent deems politically subversive. This phenomenon is known as the chilling effect or, alternatively, panopticism, after Foucault’s account of the panopticon in Discipline and Punish (Manokha, 2018).

Empirical research has substantiated the idea that surveillance, or even perceived surveillance, tends to cause this chilling effect. Researchers also theorize that this effect is likely generated by contact-tracing apps (Kitchin, 2020; Rowe, 2020), especially among immigrant members of the population, who frequently seek to blend in for fear of jeopardizing their citizenship status.Footnote 7 Those using contact-tracing apps might, for example, avoid visiting gay bars, participating in political protests, or attending the meetings of dissenting political groups, even if these activities are at present perfectly legal. In such cases, even if the surveillance is motivated by liberal, democratic and benign political concerns, the de facto impact on individual liberty is much the same as it would be if the surveillance were intentionally implemented for politically repressive ends. Given that many people are likely to behave as if such surveillance is monitored by an authoritarian agent, there are compelling grounds for considering intensive contact-tracing apps as a form of DA. Without any intentionally repressive agency, these apps foster authoritarianism by eroding political pluralism, freedom of expression, and citizens’ liberty to pursue their individual conceptions of the good life. From this standpoint, Norway’s use of contact-tracing technology would qualify as DA even during the short time that it was deployed, and despite the (presumably) benign intentions of those who deployed it.

The same chilling effect, where people self-censor and conform to hegemonic norms, can also be elicited by individuals’ fear of peer surveillance (see Manokha, 2018). Social media has enabled the close monitoring of an individual’s life by their peers, and the pervasive presence of smartphones increases the risk of one’s actions being recorded and publicized. Consequently, there is now a greater danger that one’s actions will be ridiculed or criticized by one’s peers and possibly even the wider community, all of which serves to intensify self-censorship. Once again, the point is that digital technologies are eroding freedom of speech, suppressing political pluralism and obstructing civil liberties in a manner that qualifies as authoritarian, and they are doing so without the intentional involvement of any politically repressive agents.

3.2 Digital Sovereignty

Our second counterexample is that of digital sovereignty, where “each government [imposes] its own internet regulations in a manner that restricts the flow of information across national borders” (Shahbaz & Funk, 2020). An illustrative example of this is the Chinese Communist Party’s prohibition of Instagram, WhatsApp, Gmail, and Wikipedia, along with a raft of other apps and websites (for a comprehensive list, see Binns, 2023). This type of legislation is not, however, exclusive to authoritarian regimes, since an increasing number of liberal democracies are now themselves actively pursuing digital sovereignty (Shahbaz & Funk, 2020). A recent example of this was President Trump’s 2020 attempt to ban new downloads of TikTok, purportedly on account of concerns that the Chinese government might exploit the app to acquire extensive personal data from US citizens. At the time, Wilbur Ross, then the US Secretary of Commerce, wrote of how in using TikTok, “data on locality, data on what you are streaming toward, what your preferences are, what you are referencing, every bit of behavior that the American side is indulging in becomes available to whoever is watching on the other side” (quoted in Swanson et al., 2020). Such surveillance poses a particular threat to any US citizens who regularly travel to China or who have family or business ties with the country. If the Chinese government were to perceive these citizens as engaging in subversive behavior, their freedom could well be endangered. In their critique of digital sovereignty, Shahbaz and Funk (2020) also discuss the 2020 decision of the EU’s Court of Justice to invalidate a major EU–US data-sharing agreement on the grounds that it exposed EU citizens to privacy violations, especially from US intelligence services. Though a new agreement was negotiated in 2022 (European Commission, 2023), the court’s ruling in 2020 created a partial digital blockade between the US and the EU.

Trump’s attempt to restrict new downloads of TikTok, much like the benign surveillance discussed in the preceding section, was allegedly aimed at protecting US citizens from the repressive efforts of the Chinese government. The EU’s Court of Justice was likewise ostensibly trying to shield EU citizens from invasive US surveillance that would have contravened their privacy rights. Nonetheless, Shahbaz and Funk (2020) categorize Trump’s and the EU’s actions as instances of DA, stating that “[e]ven when aimed at curbing repressive practices, these actions serve to legitimize the push for each state to oversee its own ‘national internet,’ which was previously championed only by autocratic governments in countries such as China, Iran, and Russia”. Similarly, Yayboke and Brannen (2020, 9) warn against the construction of “digital walls,” “which could be used as examples and excuses by China and other advocates of a more fragmented – and centrally controlled – Internet.”Footnote 8 From this point of view, the quest for digital sovereignty, even when honestly pursued in the name of democratic freedom, falls under the heading of DA because it legitimates the more obviously repressive forms of digital sovereignty imposed by authoritarian or hybrid regimes. While we might suspect Trump of using the idea of digital sovereignty to pursue covert authoritarian ends, the point here is that even if we give him the benefit of the doubt and assume that his reasons were genuine, he would still be guilty of engaging in DA. The implication of these criticisms is that seeking digital sovereignty, even for democratic purposes, amounts to DA because it implicitly endorses, and thereby fosters, certain authoritarian modes of governance.

This claim is open to various objections, perhaps most notably the argument that intolerance toward those who threaten the institutions of democratic liberty is a necessary condition of real-life democracy, and such intolerance does more to fortify democracy than it does to compromise it.Footnote 9 It would on this view be erroneous to characterize this form of intolerance as authoritarian, though the abovementioned critics of digital sovereignty are doing exactly that. However, we can remain agnostic regarding the actual authoritarianism of pursuing digital sovereignty. What matters is that experts consider digital sovereignty under the umbrella of DA, even when they assume it to be motivated by anti-repressive, democratic intentions.

3.3 Attention-Harvesting Algorithms

Another counterexample to the intention-based definition is to be found in discussions of attention-harvesting algorithms, particularly in the context of social media. Social media platforms strategically aim to maximize user engagement in order to expose their users to as much advertising as possible, thereby generating revenue. Lewandowsky et al. (2020, 5) emphasize how this business model is liable to compromise core democratic values: “Curated newsfeeds and automated recommender systems are designed to maximize user attention by satisfying their presumed preferences, which can mean highlighting polarising, misleading, extremist or otherwise problematic content to maximize user engagement.”
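To make this mechanism concrete, the following is a deliberately simplified, hypothetical sketch in Python of an engagement-maximizing ranking rule of the general kind described in the passage above. The Post class, the rank_feed function, the example posts, and the engagement scores are illustrative assumptions of mine, not a description of any platform’s actual recommender system.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float  # modelled probability that a user clicks, shares, or comments

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order posts purely by predicted engagement, highest first.

    Nothing in this rule refers to politics. But if polarizing or extreme
    content reliably attracts more clicks and comments, the engagement model
    will score it higher and this ranking will surface it more often.
    """
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

# Hypothetical illustration: the divisive post is ranked first simply because
# it is predicted to generate more engagement, not because anyone intends to
# promote its political content.
feed = rank_feed([
    Post("Community garden open day this Saturday", predicted_engagement=0.11),
    Post("THEY are destroying everything you love – share this before it is removed!", predicted_engagement=0.46),
])
print([p.text for p in feed])

The point of the sketch is that the ranking criterion itself is politically neutral – it simply rewards whatever is predicted to hold attention – yet if polarizing content reliably scores higher on that prediction, the system will amplify it without anyone having intended that outcome.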

While Facebook has defended its algorithms, and claims to be committed to protecting free speech (Horwitz & Seetharaman, 2020), Steven Feldstein (2021, 280) rejects these claims, retorting that “[f]ree speech does not mean that those who shout the loudest and spout the most polarizing rhetoric are the only ones who should be heard.” According to Feldstein, Facebook’s promotion of extreme and polarizing news content played an instrumental role in elevating many authoritarian leaders, including Rodrigo Duterte, into power (160–162). Similarly, Lydia Khalil (2020, 28) criticizes democratic states for allowing “the digital communications sector to develop in a way that has exacerbated polarisation.” She flags this as yet another instance of the “creeping acceptance of digital authoritarianism” within democratic nations.

Another reason Feldstein frames Facebook’s attention-harvesting algorithms as a potential case of DA is that the platform's group suggestion function has a tendency to steer its users into extremist political groups. Feldstein (2021, 271) cites an example taken from Facebook’s own internal research, which established that “64 percent of all extremist group joins are due to [Facebook's] recommendation tools”. Most of these joins were a direct result of Facebook’s “Groups You Should Join and Discover” algorithms. And Facebook itself conceded that its “recommendation systems grow the problem.”

For Feldstein, the twin impacts of attention-harvesting algorithms – that is, their tendency to polarize and radicalize – are detrimental to the vitality of democracy and conducive to authoritarianism.Footnote 10 It is on these grounds that he treats them as potential instances of DA. However, he explicitly acknowledges that these algorithms are not intentionally designed for politically repressive purposes, noting how “[a]t present, the overriding incentive that Facebook and other platforms follow is revenue and profit … In most cases, if the content increases user engagement, then the algorithm will bump up its visibility” (Feldstein, 2021, 271). Khalil (2020, 28) points to the same profit-oriented intentions when she writes of how weak regulation has allowed “major technology companies to amass huge amounts of information that can be deployed to condition and modify individual behaviour for profit”. Once again, we encounter a situation where a practice is being branded as DA despite being neither developed nor maintained for politically repressive ends, the guiding intention in this case being financial gain.

There are two potential objections to framing attention-harvesting algorithms as unintentionally authoritarian. First, one might contend that this is a case of intentional DA, since extremist groups are deliberately exploiting these algorithms to disseminate polarizing propaganda and recruit new members. While this objection is partially valid, its limitation lies in the fact that these extremists are neither the architects nor the custodians of these algorithms. Additionally, their awareness of these algorithms’ influence is likely only vague. A more accurate way of describing the situation is to say that social media platforms like Facebook take authoritarian content that has always, in a certain sense, been in circulation and strategically funnel their users toward this content with the aim of maximizing attention capture. It is this funneling process that we are interested in, and it appears to have been engineered with financial profit, as opposed to political repression, in mind.

The second objection is that tech companies at some point typically become aware of the authoritarian consequences of their algorithms. As we have just seen, Facebook was informed of such effects by research the company itself commissioned (Horwitz & Seetharaman, 2020). If, following such revelations, company executives choose to overlook the anti-democratic impact of these algorithms, then it becomes reasonable to consider such impact as at least partially intentional. Nonetheless, prior to executives being made aware of this impact, these authoritarian algorithms might reasonably be deemed vectors of unintentional DA.

Determining whether and to what extent conglomerates such as Meta deliberately propagate anti-democratic political views is a challenging task. But insisting on definitive proof of intentional involvement before categorizing these trends as instances of DA appears to be overly stringent. Indeed, Feldstein himself seems to be quite willing to treat these instances under the heading of DA, even if this contradicts his endorsement of the intention-based definition.

3.4 Tech-Induced Loneliness

Our final counterexample is tech-induced loneliness. While attention-harvesting algorithms contribute to this phenomenon, it has a range of other potential causes, such as gaming disorder and digital nomadism, and therefore merits separate consideration. In social psychology, loneliness is defined as “a distressing feeling that accompanies the perception that one’s social needs are not being met by the quantity or especially the quality of one’s social relationships” (Hawkley & Cacioppo, 2010, 218). It is vital to differentiate loneliness from isolation, the latter being an objective condition where an individual lacks social connectivity. One can be objectively isolated without experiencing the distress of loneliness. This happens, for example, when one takes pleasure in one’s own company and experiences the positive feeling of solitude. Conversely, individuals can feel lonely in the company of others, particularly if they perceive that company as oppressive or competitive. This section argues that we have good reason to think that the loneliness induced by digital technologies tends to promote authoritarian politics. As such, I submit that we should recognize tech-induced loneliness as a case of DA, even when we are unable to identify a politically repressive agent actively eliciting or exploiting this loneliness.

The idea that loneliness fosters authoritarianism was articulated by Hannah Arendt in The Origins of Totalitarianism. Arendt (1979, 475) describes loneliness as “the common ground for terror, the essence of totalitarian government”. In her view, the modern crisis of loneliness is severe and has “become an everyday experience of the ever-growing masses of our century” (478). While she remains somewhat enigmatic about the root cause of this surge of loneliness, she predominantly attributes this trend to the unique economic conditions of modernity.

The need to travel for employment not only deprives people of a stable political community but also begets a pervasive sense of uprootedness.Footnote 11 When people lack political community, they find themselves shut off from “the trusting and trustworthy company of [their] equals.” Without meaningful social intercourse, individuals are susceptible to feelings of being adrift and uncertain of themselves, making it difficult for them to experience isolation as pleasant solitude, and making it far more likely that they will experience such isolation as painful loneliness. In their search for self-certainty, these individuals are then drawn toward the organizing logic offered by totalitarian ideologies. These ideologies provide people with an artificially clear understanding of themselves, their role in society, and the overarching order of the world.

Arendt’s analysis is highly speculative, and neglects the more basic point that when individuals feel lonely and are marginalized from traditional community structures – such as family, religious groups, labor unions, and social clubs – they become vulnerable to the allure of fraternity offered by radical political movements.Footnote 12 Nonetheless, empirical evidence supports Arendt’s claims regarding the current severity and prevalence of loneliness and its potential to drive individuals into anti-democratic ideologies. US Surgeon General Vivek Murthy (2023) recently issued a public health warning about the “epidemic of loneliness” presently afflicting the Western world. Although loneliness is difficult to measure, studies indicate that approximately a third of people currently living in industrialized countries suffer from loneliness, with one in twelve being severely affected (Cacioppo & Cacioppo, 2018).

Besides the serious health problems associated with chronic loneliness, including risks equivalent to smoking 15 cigarettes a day (Holt-Lunstad et al., 2017), the condition has been shown to adversely affect democracy and representative government (Murthy, 2023). Voter participation is significantly motivated by a sense of civic or patriotic duty, as well as the belief that voters are likely to be affected by the outcome of elections. However, when individuals experience lonely isolation from their community, they are less likely to feel a civic duty to vote, or to feel as though the outcome of any elections is going to affect them, which results in lower voter turnout (Langenkamp, 2021). In line with this, there is also evidence that when people are embedded in strong social networks, political participation tends to increase (Campbell, 2013).

Given that loneliness undermines citizens’ commitment to democratic politics, it is unsurprising that a US study found a significant association between adults reporting loneliness and an endorsement of right-wing authoritarian views (Floyd, 2017). While this on its own is not enough to establish a causal relationship, other studies suggest that such a link may exist. In one such study, for example, former radicalized US citizens were interviewed about their turn to extremist politics, with a significant proportion of them identifying feelings of loneliness as a key driver of their radicalization (Brown et al., 2021).

Although digital technologies have the capacity to cultivate social connectivity, strong evidence indicates that these technologies also exacerbate loneliness. Murthy (2023) singles them out as a primary cause of the current epidemic of loneliness. Social media and gaming addiction, which can displace time with family and friends, are the typical pathways by which this occurs. Primack et al. (2017), for instance, established a positive correlation between the amount of time spent on social media and self-reported loneliness, and Hunt et al. (2018) found evidence that reducing one's use of social media significantly decreases one's risk of experiencing feelings of loneliness and depression.

Gaming disorder has also been implicated in extreme forms of lonely isolation, perhaps best exemplified by the phenomenon of hikikomori, where adolescents separate themselves from their friends, family and society for extended periods, often developing gaming addictions and intense feelings of loneliness (Kato et al., 2020). Lonely isolation is also promoted by attention-harvesting algorithms, which are apt to monopolize time that users might otherwise spend developing social connections with other members of their physical community. Moreover, digital technologies have enabled forms of remote working that have been shown to exacerbate loneliness, such as digital nomadism, where people work in a country other than their place of employment (Miguel et al., 2023), and remote working from home. While the latter often affords individuals more time with their family, it is also associated with increased levels of loneliness as individuals struggle to maintain the informal connections with coworkers “that are typically associated with building a sense of belonging” (Dery & Hafermalz, 2016, 109).

The fact that tech-induced loneliness corrodes representative democracy does not in principle mean that it facilitates authoritarianism. In practice, however, this appears to be the case. This hypothesis finds support in the highly effective recruitment strategy employed by the alt-right in the US. Steve Bannon, who led Trump’s 2016 presidential campaign, stated that while working in the internet gaming sector he discovered an army of “rootless white males.” Later, as executive chairman of the alt-right website Breitbart News, he deliberately targeted this audience. According to a Cambridge Analytica whistleblower, during Trump’s presidential campaign, Bannon also targeted “incels” (involuntarily celibate men), a demographic known to suffer from lonely social isolation (Sparks et al., 2023). Bannon openly acknowledged how gaming platforms and internet forums primed these lonely groups of men for right-wing authoritarian politics, stating “[y]ou can activate that army. They come in through Gamergate or whatever and then get turned onto politics and Trump” (quoted in Clinton, 2023).

This reveals that not only do excessive gaming and social media use promote loneliness, but authoritarian propagandists also intentionally prey on those who have been rendered lonely by such technologies. Given the compelling reasons to suspect tech-induced loneliness of fostering authoritarianism, it seems appropriate to consider it under the heading of DA. Tech-induced loneliness appears to render individuals vulnerable to authoritarian manipulation in a manner akin to benign surveillance. Importantly, though, tech-induced loneliness is not intentionally propagated by politically repressive agents; rather, it emerges as a byproduct of excessive internet use or gaming, which are designed to be as addictive as possible by agents principally seeking financial gain rather than political control. Crucially, this process facilitates authoritarianism – and so can reasonably be labeled as a form of DA – before any authoritarian recruiters, such as Bannon, purposefully attempt to corral the lonely individuals that this process engenders.

4 Redefining DA: A Sketch

The four counterexamples elucidated in the preceding section debunk the intention-based definition of DA. But how might we redefine the notion so that it can accommodate counterexamples of this sort? I suggest that we redescribe DA as any situation where digital technologies systematically promote authoritarian politics. I will refer to this as the promotion-based definition. This section explains why we should adopt this definition, and why we might nonetheless want to retain the intention-based definition as a description of a particular subspecies of DA.

4.1 The Promotion-Based Definition

The first rationale for substituting the intention-based definition with the promotion-based definition is that the latter is descriptively superior to the former. As should already be clear, the more expansive, promotion-based definition more neatly captures current expert usage. Unlike the underinclusive intention-based definition, the promotion-based definition accommodates all of the counterexamples detailed in the previous section. Moreover, it aligns with the Oxford English Dictionary’s present definition of the adjective “authoritarian” as “[f]avourable to or characterized by obedience to authority as opposed to personal liberty” (which was already cited above in Section 2). This definition does not require the active involvement of repressive political agents in order for an object or situation to be deemed authoritarian. A social practice can be authoritarian without anyone having willed it so.Footnote 13

The promotion-based definition is also pragmatically superior to its intention-based counterpart. Theorists who address DA often propose policies aimed at preventing digital technologies from fostering authoritarianism. However, operating within the confines of the intention-based definition hampers their ability to coherently identify and discuss situations where digital technologies systematically promote authoritarianism without the clear input of politically repressive agents – either because there is no such input, or because such input is obscured from view and difficult to prove. While those who campaign against DA are evidently interested in such situations (see e.g., Feldstein, 2021; Gunitsky, 2020; Khalil, 2020), they struggle to integrate them cohesively into their analyses due to their conflicting adherence to the intention-based definition. As a result, these authors often treat these situations peripherally or only as an afterthought.

The promotion-based definition demands that this grey area be given central consideration in any discussion of DA. Adopting this broader range of focus is crucial because combatting intentional forms of DA often requires simultaneously counteracting unintentional forms of DA. For instance, preventing authoritarian online misinformation and recruitment campaigns necessarily involves regulating the design of the algorithms that curate internet users’ group suggestions and newsfeeds. As we have seen, these algorithms are liable to leave citizens vulnerable to the active efforts of authoritarian agents seeking to win popular support. The Bannon example from the previous section aptly illustrates this interconnected dynamic, highlighting the fact that we often need to mitigate unintentional forms of DA in order to effectively combat its related intentional forms.

In addition to enabling critics of DA to devise a coordinated battle plan, grouping intentional and ostensibly unintentional DA under the same conceptual banner is practically beneficial because tackling each of them requires reforming many of the same institutions in many similar ways. For instance, Polyakova and Meserole (2019, 12) suggest that in order to thwart authoritarian misinformation (intentional DA), “governments should invest in raising public awareness around information manipulation. This should include funding educational programs that build digital critical thinking skills among youth.” Thomas et al. (2020) likewise recommend that universities teach digital literacy to mitigate the tech-induced loneliness experienced by freshman students (unintentional DA). And Murthy (2023, 60) suggests that governments and educators should counteract loneliness by “[b]uild[ing] social connection into health curricula, including up-to-date, age-appropriate information on the consequences of social connection on physical and mental health, key risk and protective factors, and strategies for increasing social connection.” This would necessarily involve teaching young people how to use digital technologies in a way that avoids elevating their risk of experiencing lonely isolation. Educating school and university students to avoid tech-induced loneliness and training them to identify anti-democratic fake news would serve to curb both democratic backsliding and the corresponding growth of authoritarianism. Treating these as distinct endeavors to be orchestrated by separate groups would be needlessly inefficient, given their shared objective of reforming digital learning in a way that nurtures democratic values. This efficiency argument extends to efforts to regulate the tech sector, an essential task for combatting both intentional and unintentional forms of DA. A unified, more holistic approach to policy reform is clearly required. One of the chief advantages of the expansive promotion-based definition is that it would conceptually facilitate this kind of encompassing approach.

4.2 Retaining a Distinction

Instead of outright discarding the intention-based definition, we should reconceptualize it as a subspecies within the broader framework of DA. In adopting the promotion-based definition, it is nonetheless vital that we continue to distinguish between intentional and unintentional types of DA. This is crucial because distinct types of DA are often going to call for distinct types of remedial action. Based on the preceding sections, we can draw the distinction as follows:

A. Intentional DA: Where a repressive agent intentionally leverages digital technologies to promote their authoritarian ends.

B. Unintentional DA: Where digital technologies systematically foster authoritarianism without this being intentionally caused by a politically repressive agent. (Note: In practice, we can usually only label cases that seem to fit this definition as ostensibly unintentional, recognizing that we might later discover that they were in fact intentionally caused by an authoritarian agent.)

Distinguishing between intentional and unintentional DA is imperative because intentional DA calls for types of remedial action that would be ineffective in the face of unintentional DA, such as, perhaps most obviously, punitive sanctions. Those currently engaged in the fight against DA consistently advocate for democratic nations to impose economic and political sanctions on the authoritarian agents responsible for intentional DA. These punitive measures are meant to disincentivize DA, making it more costly and therefore less attractive as a political tool. But it would be futile to impose punitive sanctions on non-authoritarian agents that are in all likelihood inadvertently engaging in DA – such as the Norwegian government in the benign surveillance case discussed in Section 3.1. In such cases, a more effective approach would be to expose the ways in which the agent in question – a political regime or tech company for instance – is unintentionally engaging in DA. The expectation would then be that the unknowingly offending agent would, upon realizing the detrimental effects of their actions, desist as soon as possible. But if they persist in spite of being made aware of the fact that they are systematically promoting authoritarianism, then the case of unintentional DA would transform into one of intentional DA, as it would now be clear that the agent is knowingly engaging in DA. This scenario would then warrant punitive measures designed to disincentivize intentional forms of DA.

So, while there are practical benefits to conceptually grouping intentional and unintentional DA under the expansive promotion-based definition, there are also tangible advantages to treating them as distinct subcategories. This approach allows for nuanced and context-specific remedial strategies, ensuring a more targeted and effective response to diverse manifestations of DA.

5 Conclusion

The central aim of this paper has been to demonstrate that the intention-based definition is unsustainable as a general definition of DA. This claim should by now be incontrovertible. Section 3 delineated various types of DA that are evidently not intentionally caused by agents seeking repressive political control. Further, we observed that many of these counterexamples are treated as instances of DA by experts who explicitly adhere to the intention-based definition. Nonetheless, we found that these experts miss potentially severe cases of DA, such as tech-induced loneliness, which do not fit the underinclusive intention-based definition. In Section 4, I introduced a novel definition – the promotion-based definition – and showed how it improves upon its flawed predecessor. This alternative remains schematic and is offered only as a starting point for further discussion. Elaborating a comprehensive definition of DA is a task that goes beyond the scope of the current study. While some might perceive this analysis as an exercise in academic hair-splitting, it is anything but. For those dedicated to curtailing democratic backsliding and the concurrent rise of authoritarianism, it is essential that experts formulate a more coherent understanding of how digital technologies are driving these trends. A definition of DA that encompasses intentional and unintentional forms alike, as well as instances where agency is unclear, should empower theorists to form an integrated plan of action. It should also enable them to identify a wider spectrum of forms of DA.

The approach of this study has been predominantly critical and negative. I have primarily focused on how experts, activists and policymakers can identify and obstruct DA in its various guises. However, one of the most effective tactics for counteracting the repressive, authoritarian tendencies of digital technology is not merely to prevent these, but to engineer and regulate technology such that it actively favors democratic flourishing. The critical thrust of this paper should therefore not blind us to the complementary need to vigorously enhance the democratic potential of the digital sphereFootnote 14.

One important finding of this study is that critics of DA, and indeed of authoritarianism in general, would do well to pay closer attention to the myriad ways that authoritarianism can be fostered without the malevolent, repressive input of authoritarian agents. While experts are right to focus their energies on political regimes that are intentionally abusing digital technologies, we should take care not to overlook the impersonal social, technological, and market forces that surreptitiously push democratic citizens into increasingly authoritarian modes of collective behavior.