It is well-recognised that cognitive irrationalities can be exploited to influence behaviour. ‘Hypernudging’ was coined by Karen Yeung to describe a powerful version of this phenomenon seen in digital systems that use large quantities of user data and machine learning to guide decision-making in highly personalised ways [1]. Much of the discussion as to how such nudging threatens autonomy concerns its manipulative, hidden nature [2]. An individual cannot ‘self-govern’ if they are being influenced in ways that they are unaware of, even upon reflection. This reasoning implies that so long as an individual can be made aware of a nudge in the right way and at the right time, autonomy is respected. On this view the relevant debate is how such adequate notice can be given.

However, Yeung and others point towards “deeper societal, democratic and ethical concerns” [1] that merely providing notice cannot alleviate. In this paper I aim to elucidate one concern of this sort by focusing specifically on hypernudging and how its use in social media constitutes a threat to autonomy with respect to moral judgments (henceforth ‘moral autonomy’ for brevity). Moral judgments can be understood here as “evaluations (good vs. bad) of the actions or character of a person that are made with respect to a set of virtues held to be obligatory by a culture or subculture” [3]. A threat to moral autonomy is of real concern because moral judgments and their associated beliefs impact how society operates at different levels. While defending this assertion is outside the scope of this paper, some examples of defensible positions are illustrative: moral judgments can influence individual behaviour by informing what is considered ‘right’ or ‘wrong’ action; they can provide a foundation for social cohesion or a source of conflict; more broadly, they can inform which individuals and institutions are held accountable, and in what ways.

In the first section I introduce a psychological model that describes two cognitive routes by which humans reach moral judgments and the conditions under which each is favoured. In the second section I apply a historical conditional account of autonomy to this model to determine the circumstances in which moral autonomy is threatened. In the third section I describe how hypernudging within a social media context creates the relevant problematic conditions so as to constitute a threat to moral autonomy. In the fourth section I explore some practical measures that could be taken to protect moral autonomy. I conclude with some indicative evidence that this threat is not experienced uniformly across all societies, pointing to interesting future areas of research.

1 How moral judgments are reached

1.1 Two routes to moral judgment

To explore how hypernudging constitutes a threat to autonomy, it is helpful to adopt a psychological model (henceforth ‘the model’) which describes how humans form moral judgments. The model I have chosen is proposed by Jonathan Haidt and recognises two co-existing routes to moral judgment — one predominant, one much less common [3, Fig. 1].

Fig. 1 The model. Adapted from [3]. The numbered links are all focussed on A’s cognition. I refer to links 1–4 as the ‘social intuitionist route’ and 5–6 as the ‘critical reflection route’. The links of focus in my discussion are (2) the social persuasion link and (5) the reasoned judgment link

The first, predominant, route—which Haidt argues has been under-emphasised by psychologists throughout the 20th century—contends that moral judgments follow directly from intuitions, which are strongly influenced by social interactions with others. He calls this the ‘social intuitionist’ model and it is represented in Fig. 1 by steps numbered 1–4. According to this route, when we make a moral judgment we “quickly, effortlessly and automatically” [3] arrive at an answer without being consciously aware of the process taken to reach it (step 3). Moral reasoning is, on this view, a post-hoc rationalisation that functions to provide a coherent and acceptable explanation for something that we already believe (step 4). Furthermore, the intuitions that determine our moral judgments are understood to be heavily influenced by the reasoning and judgment of others (steps 1 and 2). To illustrate how this model works and how common it is, Haidt recounts a hypothetical story in which siblings have a summer fling: a brother and sister have safe sex, and while they decide not to do it again, their relationship is deepened, not harmed, by the experience. When presented with this story and asked to share and explain their view, Haidt has found that listeners try but fail to find reasons to justify their intuitive judgment that the situation is wrong. For instance, they initially raise concerns about inbreeding before recalling that the thought experiment explicitly precludes this issue.

The second route allows that critical reflection and the use of reasoning can override an intuition and guide moral judgment directly (steps 5 and 6). Haidt suggests this happens only rarely, in cases where “the initial intuition is weak and processing capacity is high” [3]. According to Haidt’s model, then, we typically reach moral judgments by socially influenced intuition and only rarely via critical reflection.

1.2 Conditions under which each route operates

As it stands, the model does not fully elucidate how, why and under what conditions each route operates. A deeper understanding is needed to assess how hypernudging in social media contexts plays into these processes. Daniel Williams’ recent paper on socially adaptive belief outlines a set of plausible hypotheses that provide this additional context [4].

Williams’ core hypothesis is that “belief formation is sensitive to social rewards and punishment” [4]. While forming and holding true beliefs will tend to be advantageous in enabling individuals to make accurate decisions – for instance, a true belief about the location of food is critical for survival – our nature as highly social animals can create conditions in which it is advantageous overall to hold false beliefs. This is particularly so if the social benefits of doing so are high and the practical costs are low. Indeed, Williams argues that those who (subconsciously) form beliefs in a way that is sensitive to social rewards and punishment will, on average, achieve greater “practical success” in the real world. The suggestion is that humans have evolved belief formation that is sensitive to social rewards and punishment, and that different social contexts create different costs and benefits such that beliefs are, in practice, more or less causally dominated by social influences.
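One way to make this cost–benefit structure explicit (the formalisation is mine, not Williams’ own notation) is to say that a belief will tend to be adopted for socially adaptive reasons when

$$B_{\text{social}} - C_{\text{practical}} > 0,$$

where $B_{\text{social}}$ captures the expected social rewards (status, group acceptance) of holding the belief and $C_{\text{practical}}$ the expected real-world costs of acting on it if it is false. The cases that follow are precisely those in which $B_{\text{social}}$ is high and $C_{\text{practical}}$ is low, so the inequality is easily satisfied regardless of the belief’s accuracy.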

The following example, drawn from a study concerning views on climate change, both illustrates and provides support for Williams’ account:

High social benefits: “Because positions on climate change…have come to signify…loyalty to identity-defining affinity groups, the stances people take on the issues can be very consequential for their social status” [5].

Low practical costs: “An ordinary person cannot meaningfully affect the climate, for example, through his or her beliefs or through his or her actions as a consumer, voter, or public-debate interlocutor…as a result, any mistake an individual makes about the science of climate change won’t affect him or her or anyone else that person cares about.” [5].

This case is an example of identity-protective cognition which Williams highlights as the strongest documented example of socially adaptive belief. Identity-protective cognition is “the tendency of individuals to sample and process information in ways designed to protect their status as members of desirable groups or subcultures” [4]. This occurs in cases where individuals come to appreciate, unconsciously, that maintaining a certain position is very important for signalling the side they represent. Information is processed in such a way as to favour the creation of beliefs that are consistent with an identity ahead of those that are factually accurate [5].

Applying Williams’ theory to Haidt’s model, it is plausible to expect the social intuitionist route to dominate moral judgment under high-benefit, low-cost social conditions. As an example, an evangelical Christian in a strict community may view homosexual acts as morally wrong and justify this to others in religious terms. The social benefits of adopting the religiously accepted moral beliefs and associated judgments are high, and the practical cost of accepting a belief that is wholly socially determined may be low (for instance, if one is not in fact homosexual and knows no homosexuals, the belief may never need to be confronted). This would be a case of identity-protective cognition because the belief in question is conspicuously linked to the believer’s religious membership.

In the remainder of this paper I will focus on identity-protective cognition as a subset of socially adaptive belief, and on its role in two particular steps of Haidt’s model, one in each route. In the social intuitionist route I will focus on step 2, which Haidt refers to as the social persuasion link. In the critical reflection route I will focus on step 5, which Haidt refers to as the reasoned judgment link. The connection between Williams’ theory of socially adaptive belief and these steps of Haidt’s model is clear, and will provide a sufficient basis to demonstrate how hypernudging in a social media context undermines autonomy with respect to moral judgments. In the following subsections I describe these steps.

1.3 The social persuasion link (step 2)

Haidt proposes that “the mere fact that friends, allies, and acquaintances have made a moral judgment exerts a direct influence” on the genuine beliefs held by others (not only what they say they believe) [3, see also Fig. 1]. Identity-protective cognition provides both an explanation and evidence for this. For example, one study shows that “people are much more likely to engage with and to accept misinformation when it accords with rather than defies their political predispositions.” [5]. This can be explained on Williams’ account with reference to social conditions.

1.4 The reasoned judgment link (step 5)

Identity-protective cognition also supports Haidt’s hypothesis that it is very rare that individuals’ minds are changed on the basis of engaging in reasoned judgment (step 5). Kahan and colleagues have demonstrated that highly numerate individuals, ordinarily capable of assessing whether a certain hypothesis is supported by evidence from an experiment, lose this ability if the hypothesis is inconsistent with their group’s position on a particular issue [4].

In summary, Haidt’s model gives us two key routes to reaching moral judgments, one much more common than the other. Williams adds to this understanding by proposing why we engage in social persuasion and why engaging in reasoned judgment might be rare: both patterns are socially adaptive. Under conditions where the social benefits are high and the practical costs are low, the former route is emphasised and the latter restricted. This will be important in Sect. 3 for understanding whether and how hypernudging plays a part in these processes.

2 The conditions under which moral judgments are reached autonomously

In this section I will investigate the conditions under which we reach moral judgments autonomously. I will argue that identity-protective cognition undermines autonomy, and that its impact increases the more the social conditions of socially adaptive belief pertain.

I will begin this section with two intuitions. The first is that the social intuitionist route is non-autonomous; after all, it is automatic and heavily influenced by what could colloquially be called ‘social pressure’. However, it is necessary to be more precise. If all automatic behaviour counts as non-autonomous then most of what humans do is non-autonomous. This is a problem given the connection between autonomy and responsibility. The same sceptical conclusion results if all social influence is taken to undermine autonomy, given the significant role social influence plays in the formation of many human beliefs. Some social influences might intuitively be considered to enhance autonomy, for instance a parent teaching a child to reflect critically on the world around them; others might intuitively be considered oppressive and autonomy-removing, as in the context of religious cults. An adequate conception of autonomy needs to account for this.

A second intuition is that the ‘critical reflection route’ represented by steps 5 and 6 is clearly autonomous. Reflecting on reasons to form a moral judgment would seem to epitomise autonomous moral agency. However, in the analysis of the reasoned judgment link above (step 5) I noted that identity-protective cognition can interfere with critical reflection. The impact of this on autonomy must be investigated. I will now introduce the concept of autonomy and a particular account that can clarify the intuitions described.

At a high level, “to be autonomous is to govern oneself” [6] and this is usually taken to require two key sets of conditions—of competence and of authenticity. To be autonomous requires that an individual has the “cognitive, psychological, social, and emotional competencies to deliberate, to form intentions, and to act on the basis of that process” [2]. Additionally, autonomy requires that individuals are able to critically reflect on their values, their beliefs and their desires and act for reasons that they can authentically endorse as their own.

I will adopt an account of autonomy put forward by John Christman which is sensitive to how values, beliefs and desires are formed. According to Christman an individual is autonomous with respect to a characteristic (for my purposes a moral judgment) if and only if, were they to reflect on the historical process by which they reached it, they would not feel alienated. Alienation is understood here as a negative judgment or emotional response [6, 7].

This view clarifies the relationship between autonomy and automatic behaviour. Automatic behaviour is not in and of itself non-autonomous. Instead, whether it is autonomous depends on whether the individual in question would endorse the historical process that led to it. As I have argued with Haidt and Williams’ work above, this process of reaching moral judgments includes social influences. The question remains under what circumstances social influences undermine autonomy.

Does socially adaptive belief, by its very nature, undermine autonomy? No. The critical question, on Christman’s account, is whether a person would feel alienated were they to reflect on the process that led to the belief’s formation. This question can be applied to the two examples of social influence introduced above. In both the case of a child being taught by their parents to reflect critically on the world around them, and in the context of a religious cult, there arguably exist the conditions of identity-protective cognition. The process of teaching children to reflect on the world around them involves a combination of explaining reasons and creating incentives of social reward and punishment to develop appropriate intuitions. The parent creates an environment in which the social benefits of developing these beliefs are high and the practical costs of holding them are low. They may also link these beliefs to some sort of identity in the child, such as the idea of being a good or kind child who believes or behaves in certain ways. In the context of a religious cult, similarly, the social benefits of adopting certain beliefs, such as “homosexuality is wrong”, may be conspicuously linked to religious identity. Believing this in that context would bring social benefits and few practical costs, as described above.

However, despite this similarity, Christman’s definition of autonomy can account for the intuition that an adult who was taught as a child to reflect critically on the world around them is autonomous with respect to those beliefs, whereas an adult who continues to live in a cult is not. In the first case the adult, reflecting on the historical process by which they were, as a child, influenced to develop a belief about the importance of critical reflection, would recognise it as social and also endorse it. By Christman’s definition, they would be autonomous with respect to those beliefs. In contrast, an adult who still lives in the cult context is systematically prevented from reflecting in the way necessary for autonomy. Think back to the experiment above in which otherwise competent individuals lost the ability to evaluate evidence when it supported a hypothesis that conflicted with the views of their group. In an important sense one cannot endorse a historical process that systematically undermines one’s appreciation of the true reasons that ought to inform one’s view. In fact, Christman acknowledges this challenge by adding a further condition to his account of autonomy: a requirement that “the reflection being imagined is not constrained by reflection-distorting factors” [8]. For the purposes of this paper, reasoned judgment can be considered autonomous so long as it is not distorted by identity-protective cognition.

While accounting for the intuitions described, Christman’s definition also allows for the case where someone leaves a cult (and therefore those social conditions) and can, in principle, become autonomous with respect to their religious beliefs. This is because they may come to have independent reasons to hold those beliefs that are not simply due to their time within the cult [9]. Consequently, they may endorse the historical process by which these religious beliefs have now been formed. The discussion in this section identifies that moral autonomy is undermined when the conditions of identity-protective cognition pertain with respect to a given moral judgment.

The picture that I have painted suggests that autonomy exists on a spectrum and can be threatened by certain conditions. At one extreme, social persuasion has so fully co-opted an individual’s critical reflection that they can no longer reassess their views adequately. Such individuals would, in principle, feel alienated when reflecting on the historical processes that led to their moral judgments, and yet cannot perform this reflection because of the motivated cognition in which they are engaged. The more pronounced the incentivising social conditions and the more ideologically linked the judgments, the greater the autonomy-undermining effect will be.

In the next section I will introduce the concept of hypernudging. I will investigate its use in social media, focussing on newsfeeds. I will then argue that its use creates the conditions for identity-protective cognition and thus threatens moral autonomy.

3 The autonomy-undermining conditions of hypernudge-fuelled social media

3.1 Hypernudging and its use in social media

The term ‘hypernudging’ was coined by Karen Yeung to describe a distinct and more powerful manifestation of the traditional ‘nudge’ concept which emerges in digital contexts where large-scale machine learning is used to guide decision-making [1]. The traditional concept of ‘nudge’ was understood by Thaler and Sunstein to refer to “any aspect of choice architecture that alters people’s behaviour in a predictable way without forbidding any options or significantly changing their economic incentives” [10]. Due to the various reliable irrationalities that feature in human decision-making, aspects of the context in which decisions are made (the ‘choice architecture’) can be intentionally changed to lead decision-making in one direction or another.

In decision-guidance systems, such as social media newsfeeds, that make use of large quantities of user data and machine learning techniques, choice environments can be highly tailored. Instead of relying on cognitive and affective vulnerabilities common to all humans, these systems can identify individual vulnerabilities based on “the target’s constantly expanding data profile” [1]. The design of the choice architecture can be dynamically refined based on data feedback about the decisions the user makes in their ongoing interaction with the system, to increase the likelihood of a ‘preferable’ choice as defined by the system’s optimisation metric. In addition to the insight gathered about the target directly, choice architecture can be refined on the basis of population-wide trends in decision-making behaviour: by analysing the data of many individuals and using machine learning to cluster them into groups, these systems can identify vulnerabilities that an individual is likely to be sensitive to, given their impact on similar individuals [1]. Through these techniques, decision-guiding systems can dynamically and unobtrusively control both the design and the content of the decision-making environment.
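To make these mechanics concrete, the following Python sketch caricatures the loop just described: a data profile informs a prediction of engagement, that prediction is blended with trends among ‘similar’ users, and every interaction feeds back into the profile. All names, weights and the similarity measure are my own illustrative assumptions, not a description of any platform’s actual system.

```python
# A minimal sketch of a hypernudge feedback loop (all names hypothetical).

def similarity(profile_a, profile_b):
    """Crude overlap between two topic-interest profiles (dicts of topic -> score)."""
    shared = set(profile_a) & set(profile_b)
    return sum(min(profile_a[t], profile_b[t]) for t in shared)

def predicted_engagement(user, item, population):
    """Blend the individual's history with trends among 'similar' users."""
    personal = user["profile"].get(item["topic"], 0.0)
    peers = [u for u in population
             if u is not user and similarity(u["profile"], user["profile"]) > 0]
    peer_signal = (sum(u["profile"].get(item["topic"], 0.0) for u in peers) / len(peers)
                   if peers else 0.0)
    return 0.6 * personal + 0.4 * peer_signal  # arbitrary blend weights

def serve_and_learn(user, items, population, observe):
    """Serve the item predicted to engage most, then refine the data profile."""
    top = max(items, key=lambda i: predicted_engagement(user, i, population))
    # Feedback step: every interaction expands the profile, so the next
    # ranking is more tightly personalised to this user.
    user["profile"][top["topic"]] = user["profile"].get(top["topic"], 0.0) + observe(top)
    return top
```

Even this toy loop exhibits the key property of a hypernudge: the more a user interacts, the more tightly the choice environment is tailored to what has engaged them, and people like them, before.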

In the case of social media, the autonomy-impacting conditions result from the interaction between the platforms’ use of hypernudging, their business models and human nature. The advertising business models of social media companies incentivise user engagement with the platform [11]. Using hypernudging techniques, the information served to users, and how that information is presented, is highly personalised and optimised to maximise engagement for each user. For instance, in Facebook and Twitter newsfeeds, both the content and the order in which it is presented are personalised to increase engagement [12]. As a result, these algorithms will instrumentally promote any content that in fact motivates engagement. Humans, as social animals, are motivated to express moral judgments that align with perceived moral norms. For example, psychological studies have shown that expressing moral judgment when the violation of a moral norm is perceived leads to reputational rewards [11].

According to Williams, identity-protective cognition is most likely to occur when the social benefits to the individual of forming beliefs based on social persuasion are high and the practical costs of holding them are low. These conditions are present on social media, and they drive both the spread of moralised content and the success of social persuasion, i.e. a tendency for it to be incorporated into genuinely held beliefs for socially adaptive reasons. As described in the previous paragraph, the conditions arise from the combination of business model, hypernudging and human nature. In the following subsections I describe them.

3.2 The social benefits of expressing moral judgments on social media

In an offline environment the reputational rewards associated with expressing ‘correct’ moral judgments require the proximity of another person; online, these views are broadcast widely. Research suggests that moral-emotional language causes content to spread further than content without such language or with non-moral emotional language. One study that investigated a large sample of tweets related to three morally contentious topics (gun control, same-sex marriage and climate change) found evidence that “the presence of moral-emotional words in messages increased their diffusion by a factor of 20% for each additional word” [13]. This spread provides evidence that content expressing moral judgment motivates user engagement. The spread, in turn, further incentivises the expression of such judgments for reputational reward. This feedback mechanism, in which engagement with content drives greater exposure of that content within the network, is facilitated by hypernudging.
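Read literally, and assuming the reported per-word effect compounds multiplicatively (the functional form is my assumption; the study reports only the per-word factor), the expected relative diffusion of a message containing $n$ moral-emotional words is

$$\text{relative diffusion} \approx 1.20^{\,n}, \qquad \text{e.g. } 1.20^{3} \approx 1.73,$$

so a tweet with three such words would be expected to spread roughly 73% further than an otherwise similar tweet with none.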

The spread of moral judgment described so far largely relies on the exploitation of vulnerabilities that affect all humans equally – those often exploited in traditional nudge scenarios [2]. However, hypernudging has at least two further tools that shape the conditions of social media in ways of interest here. Given the ever-expanding data profiles social media companies hold on users, content exposure is tailored both to what the platform knows about the individual user and to what it knows engages other users who are ‘similar’ in relevant ways [1].

The research on identity-protective cognition presented in Sect. 1 would predict that social media companies that can tailor choice environments in this way would see the spread of moral judgment within ideological groups in particular. This is indeed what has been found. In the Twitter study discussed above, Brady et al. found that moral-emotional language spread particularly within networks of users who shared a political ideology (Fig. 2).

Fig. 2 Sourced from [13]. “Network graph of moral contagion by political ideology. The graph represents a depiction of messages containing moral and emotional language, and their retweet activity, across all political topics (gun control, same-sex marriage, climate change). Nodes represent a user who sent a message, and edges (lines) represent a user retweeting another user. The two large communities were shaded based on the mean ideology of each respective community (blue represents a liberal mean, red represents a conservative mean).”

Furthermore, it is precisely in these conditions, in which content is both morally charged and ideologically linked, that the social benefits of social persuasion are greatest. As in the climate change example above, such judgments become badges of group membership.

3.3 The practical costs of expressing moral judgments on social media

In the previous section I described how hypernudging in social media contexts creates conditions of high social reward. In this section I will argue that it equally creates conditions in which the practical costs of forming moral judgments on the basis of social persuasion are low.

Expressing moral judgment risks retaliation [11]. On social media, moral judgment can be expressed either by putting out content directly or by reacting or responding to moral judgment already circulating. The risk of retaliation is low in both cases as a result of hypernudging.

When it comes to putting out content directly, in contrast to offline contexts, physical retaliation is unlikely: one can gain the reputational benefits of condemning the violations of people one does not know and who do not know where to find one. Additionally, the risk of non-physical retaliation is low: the study above, among others, demonstrates that moral judgments are more likely to spread within sympathetic audiences, which are less likely to retaliate [13, 15]. As argued previously, it is hypernudging, in combination with optimisation for engagement and human nature, that leads to this pattern of spreading.

In the case of reacting or responding to moral judgment already circulating, the risk of retaliation is low due to the ease of “piling on”. As an illustration, Crockett recounts the tale of Justine Sacco: “a woman who tweeted a comment about AIDS in Africa that many perceived to be racist. Within hours, she became the top trending topic on Twitter as millions of strangers around the world piled on the shaming bandwagon” [11]. The likelihood of Sacco’s original expression ‘leaking out’ of a sympathetic audience would have been low. However, once such content is circulating, being one of many thousands expressing moral judgment in response means that one is unlikely to be singled out for retaliation. Hypernudging dynamics encourage this by serving content where the barrier to “piling on” is lowest for each individual.

Sacco’s case could be seen as a counterexample to my argument in this section, since it illustrates the risks of retaliation and hence the costs of ‘incorrect’ beliefs. However, given that the likelihood of Sacco’s original expression ‘leaking out’ of a sympathetic audience was low, the risk to her of retaliation was low. This case is therefore consistent with the position that those directly posting moral judgments can expect low practical costs.

It could be argued that the practical costs might be perceived as higher by those posting moral judgments who have already seen what happened to Sacco. In a simplistic model where risk is likelihood multiplied by severity, seeing someone pilloried for expressing moral judgments may cause users to (subconsciously) reassess the practical cost of doing so themselves, at least for some time. For a time the perceived cost may be much higher, before subsiding again. One could imagine a scenario in which a new morally charged, ideologically linked theme emerges on social media. Initially, the conditions are those of identity-protective cognition—no one has yet been pilloried on the topic and the risk of retaliation is perceived to be low. In these conditions autonomy is undermined. If someone’s comment leaks out and they are pilloried for expressing a moral judgment on this theme, this may change the social conditions—practical costs are perceived as higher. The moral judgments related to this topic have already been set by a non-autonomous process and users may now engage in self-censorship. Perhaps during the period in which the social conditions are perceived to have changed, identity-protective cognition is reduced and, in principle, these beliefs are more open to critical reflection.
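To make this simplistic model explicit (the numbers are invented purely for exposition), write perceived risk as $R = p \times s$, with $p$ the perceived likelihood of being pilloried and $s$ the perceived severity. Witnessing a pile-on may leave $p$ unchanged at, say, $10^{-5}$ while multiplying the salient $s$ a hundredfold:

$$R_{\text{before}} = 10^{-5} \times 1 = 10^{-5}, \qquad R_{\text{after}} = 10^{-5} \times 100 = 10^{-3}.$$

Perceived risk jumps by two orders of magnitude yet remains small in absolute terms, consistent with the cost seeming much higher for a while and then subsiding as the example loses salience.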

However, this argument depends on the assumption that, when a user witnesses someone else bearing the practical costs of expressing moral judgments, that person is sufficiently similar to impact the user’s perception of their personal risk. Most prominent cases of individuals bearing significant practical costs for expressing the ‘wrong’ belief on social media involve politicians or celebrities such as J. K. Rowling [16]. Even in the case of Sacco, who was not otherwise a celebrity, it was “the fact that she was a P.R. chief [that] made it delicious”, according to the blogger who originally retweeted her and set off the backlash [17]. The journalist Jon Ronson noticed that ordinary victims of public shaming on social media were rarely observed. He conducted an investigation to uncover these cases and to physically meet the individuals in order to understand the practical costs more viscerally [17]. The investigation highlights three relevant patterns regarding the perception of the practical costs of expressing moral judgment on social media. Firstly, his investigation specifically responds to the fact that, in contrast to public figures, there is little publicly available record documenting ordinary people who have borne practical costs; witnessing such shaming therefore relies on the low probability of having been exposed to a specific conversation. Secondly, the practical costs of the shaming, in terms of the emotional toll and ongoing struggles to find work, are not witnessed online—Ronson had to visit victims to appreciate them. Thirdly, Ronson reflects on the fact that the backlash observed was unpredictable at best: “[I] began to marvel at the disconnect between the severity of the crime and the gleeful savagery of the punishment” [17]. These patterns suggest that, when subconsciously evaluating the practical costs of expressing moral judgments on social media, any given ordinary user is unlikely to have observed the real costs to a comparable person and is unlikely to be able to predict that their comment may garner a similar reaction. As well as supporting the claim that the practical costs of expressing moral judgment on social media are low, this discussion raises the possibility that the risk to moral autonomy might depend on the public profile of the individual—prominent people may perceive the practical risks more keenly than ordinary people. Ordinary people would be exposed to more pronounced conditions of identity-protective cognition and would therefore be at greatest risk of their moral autonomy being undermined.

In summary, it is the hypernudge-fuelled dynamics of information sharing on social media that serve users the content they find most engaging. Information gathered on the individual and on population-wide trends is what enables the pattern of spreading of moral judgment observed on social media. It creates the conditions in which individuals are systematically exposed to moral judgments and incentivised to adopt them for socially adaptive reasons. In particular, hypernudge-fuelled social media promotes the conditions for identity-protective cognition. As I argued in Sect. 2, this constitutes a threat to moral autonomy.

4 Practical steps to protect moral autonomy

As well as highlighting the risks, the case made in this paper indicates a practical way to approach protecting moral autonomy even while hypernudging and the advertising business model of social media companies remain. Anything that undermines the conditions for identity-protective cognition will reduce the degree to which moral autonomy is threatened; in turn this requires reducing the spread of content that is both morally charged and ideologically linked, reducing the social benefits of expressing these judgments, or increasing the practical costs of doing so. Here I consider several measures that could be taken in this regard. In the main, these are measures that have been proposed to tackle other problems, such as the spread of misinformation, but that might be harnessed to protect moral autonomy.

The social benefits of expressing moral judgments on social media depend on the degree to which they spread. The “Facebook Files”—the Wall Street Journal investigation based on documents leaked by former Facebook employee Frances Haugen—provide substantial insight into what is technically possible and commercially feasible to reduce the spread of information.

Internal research from Facebook’s Integrity Team demonstrated that the specific mechanics of the ‘hypernudge’ algorithm used within the Facebook newsfeed are likely to impact the degree to which moral judgments spread. A 2018 change to the newsfeed algorithm known as ‘Meaningful Social Interactions’ (MSI) changed the weight given to content that was actively engaged with via comments or likes, and emphasised content interacted with by a user’s close contacts [12]. One aspect of this algorithm, ‘Downstream MSI’, would attempt to predict which content would go viral and increase its prominence. The leaked files showed that Facebook researchers identified that this change increased the spread of “outrage” [12]. The research highlights how the type of ‘hypernudge’ algorithm adopted matters to the degree to which it encourages the spread of moral judgments and therefore creates social benefits.
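The following sketch illustrates the shape of an MSI-style, engagement-weighted ranking. The weights, the close-contact boost and the ‘Downstream MSI’ term are placeholders of my own (the leaked reporting suggests comments and reshares were weighted far more heavily than likes, but the values and structure here are illustrative assumptions, not Facebook’s code).

```python
# Illustrative sketch of an MSI-style ranking score (all values hypothetical).

ENGAGEMENT_WEIGHTS = {"like": 1, "reaction": 5, "comment": 30, "reshare": 30}

def msi_score(post, viewer_close_contacts, predicted_downstream_engagement):
    """Score a post for one viewer's feed."""
    base = sum(ENGAGEMENT_WEIGHTS.get(kind, 0) * count
               for kind, count in post["engagement"].items())
    if post["author"] in viewer_close_contacts:
        base *= 2  # emphasise interactions among close contacts
    # 'Downstream MSI': boost posts predicted to go viral by adding the
    # engagement they are expected to generate after being reshared.
    return base + predicted_downstream_engagement

def rank_feed(posts, viewer_close_contacts, predict):
    return sorted(posts,
                  key=lambda p: msi_score(p, viewer_close_contacts, predict(p)),
                  reverse=True)
```

On this toy picture, removing the downstream term, one of the fixes proposed in the leaked research discussed below, amounts to dropping the virality boost from the score, directly removing the structural advantage given to content predicted to spread.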

The leaked research proposed several technical measures that could reduce the rapid spread of “divisive and sensational” content without affecting the company’s margin from advertising revenue in a “meaningful fashion” [12]. Examples include removing the virality prediction from the newsfeed algorithm, or removing the reshare button.

When presented with this evidence by his internal team as a way to tackle the spread of misinformation, Mark Zuckerberg chose not to make these changes wholesale but instead to develop tools to keep in reserve, to deploy in cases where there was a particular issue with misinformation. For instance, the emphasis on virality prediction in the newsfeed algorithm was reduced specifically for “civic and health information” [12] in Myanmar at a time when Facebook was being accused of feeding into human rights violations against Rohingya Muslims [18]. The decision not to apply this measure more widely was taken through fear that “good” virality would also be decreased [12]. The Facebook case suggests that while there are technical ways to reduce the social benefits generated by a ‘hypernudge’ algorithm, the culture and business model of social media companies may prevent them from being implemented.

Several regulatory measures have been proposed in a bid to change incentives and force these changes. For example, in the United States, Senator Josh Hawley has proposed the Social Media Addiction Reduction Technology (SMART) Act, which would make it a legal requirement for social media companies to reduce the speed at which content is transmitted across platforms [19]. In another measure, a tax has been proposed on targeted advertising to reduce the attraction of an advertising business model, which may in turn change incentives around the algorithmic mechanics that are adopted [20]. Other efforts from civil society have attempted to use public pressure to force the adoption of technical fixes that internal pressure alone could not achieve. A prominent effort has been driven by the Center for Humane Technology with its #oneclicksafer campaign, which aims to gather support from policy-makers and tech workers to force Facebook to allow only two reshares of content before users must manually copy and paste [21]. Internal Facebook research suggests that limiting resharing in this way, rather than removing it completely, would dramatically reduce the speed and spread of information while protecting the ability of users to share information they think is important [12].
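The substantive rule sought by the campaign, two one-click reshares and then manual copy-and-paste, is simple enough to sketch. Field names and the enforcement point below are my assumptions; only the depth cap itself comes from the campaign’s proposal.

```python
# Illustrative sketch of a reshare-depth cap (field names hypothetical).

MAX_ONE_CLICK_RESHARES = 2

def can_one_click_reshare(post):
    """The reshare button stays active only while the chain is short."""
    return post["reshare_depth"] < MAX_ONE_CLICK_RESHARES

def reshare(post):
    if not can_one_click_reshare(post):
        raise PermissionError(
            "Reshare chain limit reached; copy and paste to share further.")
    return {"content": post["content"],
            "author": post["author"],
            "reshare_depth": post["reshare_depth"] + 1}
```

The design point is friction rather than prohibition: nothing is removed from the platform, but each additional hop in a viral chain costs deliberate effort, which is what the internal research suggests would slow spread while preserving intentional sharing.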

Identity-protective cognition is also most likely where moral and ideological ideas become conspicuously connected. Twitter’s decision to stop political advertising on its platform in 2019 serves as an example of a measure that might reduce the spread of content that is both morally charged and ideologically linked. Removing political advertisements might be expected to reduce the quantity of politically related moral and ideological content circulating on Twitter and therefore the likelihood that such content is shared. Facebook’s adjustment of its algorithm for certain topics in Myanmar, mentioned above, may be reconsidered in this context—reducing the spread of content related to civic and health information in Myanmar could be viewed as a measure that targets the spread of content which is particularly morally charged and ideologically linked in a particular place, at a particular time. While this geographically limited application of constraints has been criticised as insufficient to tackle the rampant spread of misinformation online, the argument in this paper suggests that it might serve as a case study of a relevant measure to protect moral autonomy. However, implementing a set of tools that slow the spread of content related to specific morally charged and ideologically linked topics, and that are sensitive to particular geographical regions, is likely to be fraught with practical and cultural limitations.

The second principal route to protecting moral autonomy prompted by this paper is to increase the practical costs of sharing moral judgments on social media. As discussed above, even though careers are lost to social media ‘pile-ons’, I have argued that ordinary people perceive the practical costs as low. How could the practical costs be made more salient? One option would be to increase their frequency. For instance, social media companies could occasionally cross-pollinate content into unsympathetic clusters at random to increase the frequency of ‘pile-ons’; a sketch of such a mechanism follows. It seems likely that this would generally dampen sharing and would therefore meet the same pushback from social media companies as seen in Zuckerberg’s decisions above. It therefore seems necessary to rely on measures external to social media platforms. Journalistic investigations that highlight the practical costs borne by ordinary people, such as Ronson’s discussed above, might play a limited role in changing the conditions for identity-protective belief.
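The cross-pollination idea could work as follows; this is a purely hypothetical sketch of the measure proposed above, not an existing platform feature, and every name and value is my own assumption.

```python
# Hypothetical sketch of random cross-cluster exposure.

import random

CROSS_POLLINATION_RATE = 0.02  # fraction of impressions served out-of-cluster

def choose_audience_cluster(post_cluster, all_clusters, rng=random):
    """Mostly serve content within its sympathetic cluster, but occasionally
    route it to a random unsympathetic cluster, raising the expected
    frequency (and hence salience) of pushback."""
    others = [c for c in all_clusters if c != post_cluster]
    if others and rng.random() < CROSS_POLLINATION_RATE:
        return rng.choice(others)
    return post_cluster
```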

This section has investigated practical ways, suggested by the case made in this paper, in which moral autonomy might be protected even while hypernudging and the social media advertising model remain in place.

5 Conclusion

In this paper I have painted the following picture: An individual begins their use of social media with a particular set of moral intuitions. According to Haidt’s model it is unlikely that these were reached through critical reflection. The use of social media will tend to reinforce these intuitions, whatever they are. This is because the context of social media, in which engagement is valued above all else and in which hypernudging is employed, will create the conditions in which moralised content is spread and in which individuals are encouraged to form socially adaptive beliefs that align with their original intuition. These conditions are exacerbated for cases where moral ideas are linked with identity.

These conditions undermine autonomy. Were an individual to reflect on this causal influence on their moral judgments, they would feel alienated. They can account for neither the influence itself nor the way in which it systematically undermines their capacity for critical reflection. It does so by strengthening their existing intuitions and making them more likely to subconsciously ignore or discount reasons that should inform their judgment.

A critic could raise the following challenge: given the argument, put forward above, that critical reflection concerning our moral judgments is rare, the threat to moral autonomy is accordingly limited—it is unlikely that an individual’s intuitive moral judgments would have been reflected on even if they had not been strengthened by the influence of hypernudging on social media.

This challenge rests on the assumption that individuals carry intuitions for all relevant moral judgments at the outset of their social media use and that it is only these judgments that are threatened. This assumption is misplaced. If moral judgments are informed by intuitions (step 3) that are in turn informed by social persuasion (step 2), then there is no reason to believe that the social conditions in which an individual has been brought up have caused them to form an intuition concerning every morally salient scenario. There may be existing conundrums to which the given individual has not been exposed: a young adult in a conflict-free democracy, for instance, may have no settled intuition on whether killing civilians in war is ever justified. Novel moral puzzles may also emerge in appraising new developments discussed on social media, such as whether to condone AI friend bots. Finally, while individuals may come with existing intuitions, they will be undecided on many of the important nuances of their judgments that emerge in real life. For instance, someone might be pro-choice up to a certain developmental point but be uncertain about where to draw the line.

These cases are equally vulnerable to the autonomy-threatening impact of hypernudging. The first case serves as a useful example. Suppose an individual has grown up in a particular social grouping which has furnished them with a set of moral intuitions. The use of hypernudging on social media will cluster them with ‘similar’ people to inform how content is served to them. There is no reason to think that this clustering will exactly match their original social grouping. As such they are likely to be exposed to moral judgments that are outside their original set of intuitions but within the equivalent set for other individuals who are clustered in the same group. As argued in this paper, due to the social conditions set up by the use of hypernudging in social media, this exposure will directly inform their moral intuitions through social persuasion under the conditions of identity-protective cognition, and therefore in a way that threatens their autonomy. Taken together, these cases illustrate that the circumstances in which moral autonomy is threatened are not as rare as Haidt’s model might initially suggest.

What makes the use of hypernudge-fuelled social media a threat to an individual’s moral autonomy, compared with not using it, is the way in which it will tend to change the conditions under which they form moral judgments. The engaging nature of these platforms will tend to increase the proportion of their time spent under conditions of identity-protective cognition; social media is also more likely than offline contexts to expose them to new moral spheres, due to the propensity for moralised content to spread and to dynamics such as the clustering described above. This combination will tend to leave individuals with a greater number of moral beliefs formed in a way that lacks autonomy. Indeed, the alternative to forming moral intuitions through social persuasion is not only critical reflection but also having no opinion at all.

This paper also suggests that certain societies will be particularly at risk. Cultures in which moral and ideological ideas become conspicuously connected will be those in which the autonomy-threatening effect of social media is most pronounced. These will be cultures in which individuals are otherwise incentivised to hold a stronger ideological stance. While further research is needed, this prediction seems to be borne out in existing research. For instance, polarisation patterns in social media have been identified in two-party political systems, such as those of the UK or the US, but are less clear in multiparty systems [14].

It could be objected that in these societies hypernudging presents no additional threat to moral autonomy beyond that already faced offline: if an individual already had a strong identity, then they were already impacted by identity-protective cognition and the associated threat to autonomy. As Kahan puts it, “Individuals’ cultural predispositions exist independently and are cognitively prior” [5]. Is social media really a threat to our autonomy, or does the evidence just reflect cultures that were not morally autonomous in the first place? The response to this objection has two parts. First, regarding existing intuitions, hypernudging in social media exacerbates the threat that identity already presents to moral autonomy. Intuitions are strengthened by exposure and by the conditions of socially adaptive belief, in particular of identity-protective cognition. Identity-protective cognition “diverts individuals…from using their reasoning to recognise” other morally relevant reasons for their judgments and “instead redirects it to conforming their beliefs to ones that predominate in their cultural group” [5]. Second, as argued above, it is not only existing intuitions that are at risk—hypernudging will tend to expose individuals to new moral scenarios. For instance, the clustering described above effectively exposes users to new ‘cultural groups’.

The reflections in this paper suggest that hypernudging in social media contexts does not present a uniform threat to all cultures and all individuals. Shannon Vallor has noted:

“Technologies neither actively determine nor passively reveal our moral character, they mediate it by conditioning, and being conditioned by, our moral habits and practices…At present, they too often shape it for the worse” [23].

Further research into cross-societal differences would both lend support to the argument made in this paper and inform further investigation into the extent of the threat.