1 Introduction

Dataism is the ascending candidate for overtaking humanism as the West’s master narrative, a term for the metaphysical story that structures a community’s beliefs and provides meaning to experience. Renaissance humanists broke with theism, which had posited that divine will was the supreme authority. No longer should individuals or groups try to discern God’s will when making decisions; human thought and emotion should reign supreme. This master narrative transition marked the beginning of the early modern era. Half a millennium later, humanism seems to have played itself out. By empowering the modern individual, liberal humanism set the West on a course of technological innovation and global domination (Henrich 2020). Not only were free markets and liberal democracy underpinned by these new beliefs, but colonialism and other forms of universalizing homogenization were, too. Like much metaphysical fiction, humanism came with a limited shelf life. Habermas (2019) concludes that the humanist creed that arose from the Enlightenment reconciled a series of conflicts—such as those between emotion and reason, and faith and knowledge—in a manner that made for an unstable foundation.

In the modern environment, liberal humanism gave a comparative advantage to the West. This master narrative made the most of new technology through distributed decision-making. Enlisting the brain power of individuals to process an increasing flow of data from expanding markets was more effective than feudal rule or other forms of centralization. In the twenty-first century, we will create computers with processing power that far exceeds that of human brains. Historian Harari (2016) suggests that a certain posthumanist narrative—dataism—could be the most adaptive approach to such an environment. This belief system holds that big-data algorithms will make the best decisions. The more data and interconnectivity, the better; dataism’s highest ideal is the free flow of information (Antosca 2019). Instead of divining God’s will, or consulting human thought and emotion, dataists submit to “the logic of data” (Pedersen and Wilkinson 2019, p. 8). According to this computer-age narrative, “the universe consists of data flows, and the value of any phenomenon or entity is determined by its contribution to data processing.” This view, argues Harari, “has already conquered most of the scientific establishment.” Making dataism our cultural view, too, he believes, could help humans merge with machines instead of being made redundant by them.

Harari builds on a long tradition of thought. Heidegger’s historicist destruction of metaphysics prepared the ground for posthumanism. From Being and Time (Heidegger 2010[1927]) on, Heidegger’s deconstruction of Western ontology loosened the crucial threads of the humanistic narrative. Propositions presumed universal were accused of being part of a Eurocentric logocentrism, which later postmodernists undermined further. The West’s Kantian aspiration for a peaceful federation of like-minded nations reached a curious peak with 1990s Fukuyamaism, then crashed into the increasingly despondent politics of the twenty-first century. Today, liberal humanism no longer seems like history’s inevitable endpoint, but like an ideological narrative in need of a successor. Robbins and Horta call for a cosmopolitanism that makes room for “multiple and overlapping conceptions, forced by the imperative of inclusiveness to change its own rules” (2017, p. 16). Since the end of the Cold War, this discourse has come to be centered on different creeds of posthumanism, of which dataism can be seen as the leading candidate.

To explore posthumanist potentialities, scholars and sci-fi creators have long fed off each other’s contributions in fruitful ways. Stories about androids that awaken to consciousness have been of particular relevance, as they let people experience vicariously the challenge of freeing oneself from humanistic anthropocentrism. The Swedish TV series Real Humans (Lundström 2012–2014) and its British remake, Humans (Vincent and Brackley 2015–2018), offer culturally distinct approaches to posthumanist problematics. The Swedish protagonist family presupposes a social-democratic ethos when they negotiate personhood with Anita, their android servant. This strategy is meant to ensure the Engman children’s continued adherence to parochial values by not letting them treat even a digital other in a non-inclusive manner. The family attempts to bring humanoid robots into the ethical relationship between humans through an emphasis on individual dignity, which is key to the social-democratic ethos. The British remake, Humans, lessens the original’s emphasis on which values produce the good life, instead prioritizing themes of violence and political struggle to suggest how our posthuman future may unfold.

The Heidegger-inspired notion of trace helps explain why a transition to a posthumanist worldview could be driven by culturally distinct creeds. The German philosopher has been criticized for not breaking strongly enough with humanistic anthropocentrism (Derrida 1982a; Wolfe 2010). Heidegger sided with Nietzsche, who proclaimed that a metaphysical system does not start afresh, but arises “from the ashes of the metaphysics that preceded it” (Bishop 2010, p. 705). Because humanism exists in different forms, distinct cultures would bring with them different metaphysical traces if they were to transition toward posthumanist beliefs. Aspects of current humanistic creeds—even anthropocentrism—are likely to remain in one form or another, because “a trace of that which is overcome remains in that which overcomes” (Rae 2014, p. 57). If the twenty-first century entails a transition toward dataism or other creeds of posthumanism, then—from a Heideggerian perspective—the new master narrative would not “overcome humanism once and for all [because] posthumanism must always tarry with humanism” (p. 66). Future beliefs would evolve from previous beliefs as a response to problems that have long been apparent with humanism, in addition to new pressures from novel technologies.

Robbins and Horta point to how innovations from the First and Second Industrial Revolutions let the West impose its liberal-humanist cosmopolitanism on the world. “Since the beginning of the twentieth century,” they write, conditions permitted

The development of cosmopolitanisms on a new and larger scale: economic and political interconnectedness reaching further down into society, improvements in the technology of transportation and communication transmitting news of distant places and allowing dispersed populations to stay in contact, and so on (2017, p. 8).

Our present era’s Fourth Industrial Revolution seems likely to facilitate new cosmopolitanisms. Additional levels of interconnectedness could make possible a global cooperation that leaves room precisely for “multiple and overlapping conceptions.” Several of the emerging technologies pose considerable threats, too. Yet their potential upside is so great that Andy Clark considers it unlikely that humanity will resist the temptations of automation, artificial intelligence, and biological self-enhancement—technologies that unsettle core humanistic beliefs. Clark argues that

If it is our basic human nature to annex, exploit, and incorporate nonbiological stuff deep into our mental profiles—then the question is not whether we go that route, but in what ways we actively sculpt and shape it. By seeing ourselves as we truly are, we increase the chances that our future biotechnological unions will be good ones (2003, p. 198).

To aid this process of self-understanding, it could be helpful to investigate the culture-specific traces we bring to the posthumanist project. If such an analysis can unconceal internalized presuppositions, humanity could have a better chance at crafting and committing to adaptive narratives for the epoch ahead.

Heideggerian and related perspectives bring our attention to promises and perils of this process. For instance, in Clark’s call to action, Rae finds Clark reaffirming “the binary logic of humanism (‘good’ v. ‘bad’ outcomes) [and] the anthropocentric notion of human mastery his theory is supposed to overcome” (2014, p. 63). Our present era’s exploration of posthumanism could be viewed as modern humans further succumbing to the dangers of Heideggerian enframing, which reduces everything to raw material for technological transformation. Heidegger’s notion of freedom, too, undermines the purported value of posthumanist enhancement. However, the related concept of Derridean hope suggests how certain posthumanist outcomes could solve some of modernity’s problems with respect to Heideggerian being (Antosca 2019). The medium through which a posthumanist master narrative would be communicated could facilitate a form of dataism that I refer to as algorithmic universality. This dataist creed has the potential to re-enchant the modern world and bring forth a new epoch of being. This article’s fictional subject matter, Real Humans and Humans, suggests that even if the process toward such a future is driven by intercultural competition, the decisive choice may be out of human hands. Intriguingly, re-enchantment is portrayed as requiring a disempowering of humans. Only a machine that understands us better than we understand ourselves is thought to be able to facilitate the necessary level of benign manipulation that could make humans feel whole again.

2 Posthumanism and Heidegger

Since Hassan’s (1977) call for a posthumanism that could heal “the inner divisions of consciousness and the external divisions of humankind,” the term has been filled with a variety of content (p. 833). Dataism is one creed of this new -ism. I use the term posthumanism firstly for all philosophies that use humanism as a starting point for a new master narrative. A range of classifications and creeds exist, but the relevant division for my purposes is that between what is often called transhumanism (or liberal posthumanism) and what is commonly referred to as just posthumanism (or critical posthumanism). Transhumanists typically want to improve “the human condition through applied reason, especially by developing and making widely available technologies to eliminate aging and to greatly enhance human intellectual, physical, and psychological capacities” (Bostrom 2003, p. 2). Posthumanists have no such techno-utopian ambitions but recognize “that the idea of the humanist subject is being undermined by trends in emerging sciences and postmodern shifts in self-awareness” (LaGrandeur 2015, p. 112). The critical posthumanist usually seeks to “overcome the binary logic of humanism [to] deconstruct the human/non-human dichotomy [in order to] question anthropocentrism’s notion of a self-referential entity called ‘the human’” (Rae 2014, pp. 64, 65).

In future technology, transhumanists see an opportunity to further the Enlightenment project of ameliorating the human condition. They seek “the relief of man’s estate” (Bacon 1960, p. xxvii) through making humans happier, healthier, smarter, and eventually immortal in one form or another. Transhumanism can be seen as the “heir of the humanist project due to its understanding of individual freedom and autonomy as the highest values of human existence” (Onishi 2011, p. 104). Critical posthumanists tend to view such beliefs as “an intensification of humanistic privileging and [an] instrumental view of technology [while failing] to understand that human being exists in a symbiotic relationship with technology.” Instead of promoting further self-realization and immortality, posthumanists advocate a different understanding of what humans are and what they should strive for. Thus, “posthumanism aims to deconstruct the binary oppositions of humanism to open alternative vistas; an attempt that shares much in common with Heidegger’s attempt to open alternative vistas through his destruction of metaphysics” (Rae 2014, p. 65).

Building on Heidegger’s ontological assault on Western thought, Derrida—primarily through his concept of deconstruction—developed what can be viewed as proto-posthumanist theory (Herbrechter 2015). Derrida’s influence on posthumanism is well established, but “Heidegger’s relationship to posthumanist theory has been largely ignored.” Rae argues that the German philosopher “influences posthumanism through his destruction of metaphysics, critique of anthropocentrism and notion of trace” (2014, p. 51).

These elements let critical posthumanists appropriate Heidegger. His perspective on Enlightenment universality, human exceptionalism, and self-contained ideologies helps us examine how (critical) posthumanism is different from both humanism and transhumanism. Heidegger’s concept of enframing, which he applied in his critique of technology, has been used to paint transhumanists as humancentric exploiters of nature. Enframing “refers to the way we reduce the essence of nature to the wealth it offers when transformed by human technology [thus revealing] the essence of technology as standing reserve” (du Toit 2019).

The notoriously obscure German can also be appropriated for transhumanist purposes. Heidegger argued that humans “experience the world first and foremost as a world of meanings, rather than objects. [This] challenges the objective Cartesian concept of an observer apprehending an array of objects in a context-independent world” (Antosca 2019, p. 1). Humanism facilitated a Cartesian modernity that rejected theist meaning systems in favor of a scientific worldview. This stance empowered humanity through technological innovation, which brought forth more effective means for transforming nature. Yet such an approach has imposed on humanity the “disenchanting effects of a uniquely modern existential meaning crisis” (Antosca 2019, p. 4). Stated differently, humans did not evolve to thrive in a world without deep meaning. Technology has made us more powerful, but at a considerable psychological cost. Much of modernity’s malaise is therefore likely to continue until different beliefs offer a more compelling purpose to human life than what follows from rational scientism.

Some transhumanists—drawing on Heidegger—argue that their approach to human enhancement can facilitate “a ‘re-enchanted’ state of living in an immersive world of technologically enabled enchantment” (Antosca 2019, p. 24). They reject critical posthumanism for being backward-looking. Freeing ourselves from technology and returning to our “natural” state is not a realistic scenario. New technology may have put us in our modern predicament, but even newer tech would be required to facilitate an environment that is able to adapt itself to whichever preferences a human group may have.

Heidegger saw it as his purpose to clear space precisely for a future re-enchantment of the human outlook—what Derrida referred to as hope. In such a newly cleared space—whatever this could entail—the world would again, as it did in our primordial environment, appear magical to those who inhabit it. Western individuals could, again, commit to beliefs that provide them with deep meaning. Derrida places this hope on “the other side” of Heidegger’s nostalgia for pre-modern times (1982a, b, p. 27). Heidegger did not suggest which beliefs and values such a space should be filled with, but considering his antagonism toward technology it seems unlikely that transhumanism—or dataism—would have enthused him. He did, however, view his own metaphysics, or lack thereof, as just another historicist position that future thinkers would be within their rights to overturn.

Heidegger could thus be interpreted as supporting the claim that in an era in which it is not oil, steel, and electricity that transform the world, but artificial intelligence, we must address anew what it means to be human. AI “is a modus of being that questions (being) in a new way because it entails a different kind of species as well as a possible different kind [of] ‘humanity’ through the merging of human and machine intelligence” (du Toit 2019, p. 6). This turn-of-the-millennium quest for a new ontology draws in non-expert audiences primarily through sci-fi narratives in literature, film, and other media. Harari (2018) suggests that “in the twenty-first century science fiction is arguably the most important genre of all, for it shapes how most people understand things like AI, bioengineering and climate change.”

The Swedish TV series Real Humans and its British remake, Humans, are apt case studies. Both were popular with viewers, received wide international distribution, and spurred a variety of debates. The two series tell a similar story, but since it plays out in different Western cultures, we can investigate the notion of trace. Set in the 2010s, the series’ only novelty is robots with human-like appearance, skills, and intelligence. With such a relatable story world, the series avoid “extreme dystopian and utopian visions [which] enables viewers to consider some of the more realistic potential benefits and costs of artificial intelligence and robotics and how they might affect human life and social relationships” (Wicclair 2018, p. 510).

The three-season British remake is quite similar in terms of plot and characters, and its themes carry forward the posthumanist exploration that the two-season Swedish original began. The combined five seasons can be viewed as a transcultural narrative that challenges humanist beliefs from Swedish and British starting points. The series dramatize the opposition between (critical) posthumanist and (liberal) transhumanist visions for the future. The final episode ends with a human-android synthesis that appeals to both posthumanist and transhumanist longings. This ending, which aligns with a dataist perspective, suggests that peace, prosperity, and togetherness between cultures and species will require human submission to AI governance in a manner that will not empower human individuals by letting them choose from a variety of possibilities for being. Instead, the cognitively superior machine will rely on insightful manipulation to impose environments on humans that their thought and emotion will perceive as enchanting.

3 Social-democratic posthumanism

The Engmans are a middle-class family whose looks are as typically Swedish as their interior design and the values they live by. We meet the family inside their bright, stylish home, which is filled with entertainment technology, complex kitchen appliances, and all the trappings suburban families acquire for convenience or a variety of other justifications. Their material comfort suggests that everything is in place for a good Scandinavian family life—with hygge, efficiency, and togetherness. Still, the father is unable to make the new coffee machine work, the youngest daughter burns her breakfast, and the son struggles with math that is too complex even for his father. The Nordic-pole-walking mother must hurry to solve a crisis for her retired father before she can continue hurrying to her demanding job as an attorney. This dysfunctional morning is juxtaposed against a TV ad for a “graceful, elegant, efficient [android that] takes care of your day-to-day chores while you can focus on what really matters.”

From a Heideggerian perspective, the Engmans inhabit a world where humans “have been slowly demoted from the leading role of creators and masters of technology to that of technological co-dependents and co-agents” (Hauskeller et al. 2015, p. 5). Since the introduction of agriculture, technologies that were meant to contribute to “the relief of man’s estate” have instead moved humans away from the conditions they evolved for. Such alienation has primarily been driven by population growth and a technological complexification that seem impossible to reverse. For a species that got by with a few hours of hunting and gathering each day, the modern world—with its professional, social, and psychological pressures—does not appear as an obvious improvement, at least not to the extent that Enlightenment optimists envisioned.

On the surface, the Engmans’ world is “bright and cheerful, filled with glossy color and order [with] sleek contemporary Swedish-designed interiors [and a] nuclear family with two blond parents and three blond children [who are] shot as if through a filmy white haze, creating a Nordic blanc aesthetic” (Sandberg 2020, p. 227). Below this shiny surface exists a discontent that seems ameliorable only with the introduction of ever-more technology. Unfortunately—according to Harari—such gadgets and software never bring humanity back to a state of equilibrium, because

When you try to manipulate the system even more to bring back balance to an earlier state, you solve some of the problems, but the side effects only increase the disequilibrium. So you have more problems. The human reaction then is that we need even more control, even more manipulation (2017, p. 41).

In Real Humans, this “even more manipulation” is the introduction of humanoid robots that make businesses and families more effective. Slightly stiffer kinetics and a lack of desire make the androids stand out. They could take over more human tasks, but prejudice and cultural inertia slow the transition to a robotized world. Scandinavian presuppositions explain why the Engman breadwinner, Inger, is unwilling to buy an android despite her family’s purported shortcomings. To her, it feels wrong to have someone else—even a digital other—perform those tasks that Nordic parents should do themselves. The family still ends up with Anita as their servant. Such an outcome would not surprise those convinced by insights from Heidegger’s concept of enframing. We may think that technology is “an instrument for manipulation for human ends [but because] technology shapes and determines our view of the world… humans do not control technology, but, rather, are determined by the revealing of technology” (Rae 2014, p. 61).

When the Engmans try Anita out as their new technological enhancement, Inger is won over by how the android helps them realize cultural preferences that were less attainable before. Admittedly, always having someone available to do any menial chore is convenient, but this also comes with side effects. Anita’s help with homework and domestic chores frees up time, but it is time parents and children could have spent together. That she is willing to scratch the children’s backs and clean up their mess could lead to entitlement and laziness. Yet Inger, busy getting ready for the office, softens when Anita sets the table in a manner that warms Scandinavian hearts. The android has created a lavish smörgåsbord of typical breakfast delights. The elaborate arrangement is more commonly a weekend or vacation treat that inspires feelings of hygge and family togetherness. Inger sits down to enjoy the shared breakfast, but stops Anita from doing their laundry, insisting that “I’ll take care of that myself” (RH 1–1, m. 44).

Inger’s internalized ethos of individual independence and Lutheran duty eventually gives way to justification, which is what a Heideggerian would expect. The stories we offer as our motivation are often little more than covers for our nonconscious will to power (Richardson 2004)—in this case, the power humans can harness through the low-cost services of robotic underlings. By episode 3, Inger has arrived at such a story—a social-democratic compromise—that lets Anita continue as their posthuman other:

I think we should keep her [but] I want us to treat her like a member of this family. Like a human, at least. I don’t want orders barked at her. She shouldn’t have to do everything. And she’s not cleaning your rooms, or making your beds. That’s your job. No back scratching. And after 9 pm, she’s off duty.

Her husband chuckles at such obvious anthropocentrism: “Off duty?… What does she get out of that?” Inger is unsure but insists that “it’s a matter of dignity” (RH 1–3, m. 2–3). Inger thus extends humanist principles to a posthuman entity. Her argument aligns with what cultural critic Witoszek refers to as Scandinavian humanism, for which a key value is every individual’s right to dignified treatment (2011; Witoszek and Midttun 2018). The social-democratic ethos—of inclusion, sufficient leisure, respect between classes, and the individual’s duty to contribute—informs the first stage of the Engmans’ negotiation of Anita’s personhood. Inger’s decision seems less concerned with Anita’s preference and more with how having a servant could affect the Engman children’s continued adherence to parochial values. Inger’s position aligns with posthumanist theory that argues that to “treat humanlike android others as dispensable objects is to dehumanize the self and ignore the inherently ethical relationship among humans in which these androids enter” (Ramey 2005, p. 137).

From this anthropocentric starting point, the Engmans embark on the same posthumanist journey as their wider Swedish community. Androids are embraced within eldercare, which lets Scandinavians fulfill a cultural preference for individual independence. Nordic families typically do not want to be burdened by care for their elders, and seniors do not wish to pose such a burden. Although the Nordic model offers world-class care through public financing, that care is rationed by the hour, while androids never sleep nor stop adhering to what their algorithms define as optimal care.

In terms of gender relations, Scandinavian states had already, to a significant extent, freed women economically from male partners. Android lovers liberate women emotionally, too, from human partners who fail to provide the attention and support women are portrayed as preferring. Real Humans dramatizes a strongly gendered view on human-android copulation and pair-bonding, perhaps drawing on a cultural preference similar to the one that led Swedes to criminalize the purchase of sex but not its sale. Male attraction toward female androids is predominantly portrayed as oppressive or pathetic, while female relationships to male androids lead to mostly positive outcomes. Viewers are, however, led to read some irony into this obvious—and very Swedish—gender bias.

The local minister is quick to include androids into the church’s Lutheran fellowship of believers. Swedishness becomes more of a moral status than a biological one. When one android fails to see the reproductive logic of the minister’s homosexuality, the other androids sanction her for her homophobic slur. Liberal views on sexual preference are portrayed not only as a modern humanistic value, but as a stance that even posthuman subjects would view as universal. Today, refugees to Scandinavia meet similar demands of homotolerance as a non-negotiable part of entering the Nordic community (Jacobsen 2018). Such tests of morality are common in android fiction; digital others may not be biological humans, but human—or at least humane—in a moral sense (Persson and Savulescu 2010). Criteria for belonging can thus be shaped not only to cut between species, but across, as humans who demand moral adherence from androids can use the same criteria to exclude nonconforming humans, too.

In Real Humans, people who resist android-human togetherness are understood through a class perspective dear to Scandinavian sensibilities. When rebel androids plan how to win acceptance for their own subject position, the series imbues them with a keen eye for class relations. Niska, their leader, concludes that “all humans love us… except for a few powerless creatures who hate us. If they were to grow stronger, the fear of us may spread, and the antagonism among humans increase” (RH 1–4, m. 13–14).

Human chauvinism is thus construed as a form of right-wing populism that appeals to working-class individuals who lose jobs and partners to androids that outcompete them. Such individuals are portrayed with sympathy, yet the series makes clear that their own poor work habits, violent tempers, and limited cognition are the true causes of their malaise. Androids—even the conscious ones, if treated humanely—are presented as valuable additions to the social-democratic People’s Home. Low-performing, fallible individuals should not view androids as threats, but as equality-producing assistants that can help lower-class individuals live up to hegemonic morals and ambitions and, by extension, become more like society’s high-performing individuals. The outcome of such android-assisted social engineering is meant to be enhanced social sameness, a more effective and equitable harnessing of nature, and a realization of hegemonic cultural preference beyond what yesterday’s technologies could facilitate.

The two seasons’ thematic argument promotes a posthumanist future that builds on Scandinavian humanism in a way that incorporates posthuman entities into the existing moral community. AI is thus portrayed as being not that different from previous tech innovation. Similarly, social issues that arise from AI are addressed with the same humanistic logic that served Swedes in the past. For instance, a woman who wants to sue a nightclub for denying entry to her android lover insists that she has “an open and shut case of discrimination… You can’t discriminate anyone. Skin color, politics, religion, sexual orientation. Am I wrong?” (RH 1–4, m. 8–9). Likewise, the Engman son justifies his attraction to Anita as a consequence of him being “transhuman sexual [which is like] being left or right-handed” (RH 1–9, m. 36–37). Real Humans can thus be interpreted to promote a transhumanism that “is not abandoning the autonomous liberal subject but is expanding its prerogatives into the realm of the posthuman [thus grafting] the posthuman onto a liberal humanist view of the self” (Hayles 2008, pp. 286–87).

If the Swedish community can get behind such a distinctly humanist posthumanism, they—according to the series—stand to gain better care for seniors, more time for hygge, confirmation of parochial values, and fulfillment of female sexual and romantic preference. In return, the androids demand equal legal status, which they achieve with Inger as their attorney. Her final, winning words in the last episode align with the egalitarian Nordic ethos: “[Androids] feel, they dream, create, and they make mistakes, just like us. They are not perfect, and when we talk about ourselves, that is what we refer to as being human” (RH 2–10, m. 51). Scandinavian ontology thus becomes one of fallibility—of not being perfect—which warms egalitarian Nordic hearts and offers inclusion to crude, early versions of conscious androids. This seeming equilibrium is unsettled by the court sequence being intercut with uniformed androids on the move, representing a community that is uniting behind its own emerging mythology.

Having been made fodder for human entertainment at a shooting ground, the android leader explains that “it was only when I came here to [Android] Battle Land that I realized who I am and what my purpose is” (RH 2–9, m. 16). Empowered by permanent memory and feelings that they experience as authentic, androids develop their own worldview with a different ontology. Humans thought legal equality would suffice, but many androids do not feel equal; they feel superior. One android reverses Heidegger’s position, claiming that mortality makes humans lack meaning and “the ability to understand what kind of place we live in” (RH 2–10, m. 0). As awakened androids prepare for conflict and further enhancement of their own capabilities, a court expert warns against Inger’s naïve hopes for cooperation: “They will be superior to us and outcompete us… Have you tried to cooperate with an idiot?” (RH 2–10, m. 27).

Descartes grounded modern philosophy in the belief that reason is “the only thing that makes us men and distinguishes us from the beasts” (1988, p. 21). This rationale convinced him of his own human being, his “I am thinking, therefore I exist” (1988, p. 36). With Real Humans’ seasonal cliffhanger, such human chauvinism faces its first challenge from an entity with superior cognition. If guided by Cartesian dualism, we could surmise that two species—sharing a cognitive approach to an objective world—could make good collaborators. Applying Heideggerian insights on the subjective nature of being, one would be less surprised by the posthuman supremacism that guides the androids. This tension does not get a Swedish resolution, as the public broadcaster decided not to order a third season. The creators’ posthumanist exploration is furthered in the British-American co-production Humans, which relocates the story to present-day England.

4 Toward the singularity

Laura Hawkins, Inger Engman’s counterpart, applies no social-democratic ethos to arrive at her acceptance of Anita as the family android. Being a busy lawyer harms Laura’s relationship with her husband, so he wants an android “to buy us time. I didn’t buy Anita to replace you. I bought her to get you back.” Modernity’s pressure on high-performing professionals is thus portrayed as justifying the adoption of technology that can free such individuals from non-income-earning activities that do not contribute to self-realization.

The family’s well-being is of primary importance. Android threat therefore comes first in the form of sexual temptation. In Real Humans, the husband never acts on his impulse to copulate with Anita. In Humans, he does, and Laura throws him out of the home when he admits his transgression. This version, too, condemns male attraction to androids. Female lust and love for androids are portrayed more positively. The Swedish focus on community values becomes a British focus on family values. The remake makes the android threat less about undermining inclusion and individual independence and more about sex, violence, and job automation.

The latter concern was perhaps fueled by Frey and Osborne’s much-publicized paper (2013), which was communicated to the public as showing that half of US jobs were at risk of automation. Later research suggests similar numbers for a range of countries. The Hawkinses watch an expert argue that “the best reason for making machines more like people is to make people less like machines… We’ve treated people like machines for too long. It’s time to liberate their minds, their bodies to think, to feel, to be more human” (H 1–1, m. 40). By referring to workers in China, Bangladesh, and Bolivia, the expert brings attention to how the posthumanist project need not

Mean the end of humanity. It signals instead the end of a certain conception of the human, a conception that may have applied, at best, to that fraction of humanity who had the wealth, power, and leisure to conceptualize themselves as autonomous beings exercising their will through individual agency and choice (Hayles 2008, p. 286).

With this international class perspective, Humans paints a larger canvas that offers a more complex and urgent approach to the pros and cons of AI. Real Humans drew androids into a Swedish creed of posthumanism with such a strong trace of Scandinavian humanism that no more adaptation seemed necessary than to treat androids like Swedes treat each other. No such stability appears possible in the remake. In the series’ title sequence, an American newspaper declares that “Robots Threaten 10 Million Jobs.” Referring to contemporary real-life issues makes the series come across as more relevant, all the more so as concerns about automation grew over the course of the 2010s. Instead of furthering the Scandinavian focus on parochial values, the UK–US coproduction extends its narrative in a violent, clash-of-civilizations-like direction.

Having fled brothel captivity, the android leader, Niska, embarks on a spree of revenge. She beats up and kills humans for what their conspecifics exposed her to as a sex worker. When a human essentializes her violence, Niska makes a somewhat Heideggerian argument for her actions. Her hardness comes not from her programming, she explains, “that’s lazy thinking. My experiences have shaped me, just as yours have you” (H 1–5, m. 16). A newly awakened android takes the series’ critical posthumanism further. While threatening a human hostage, she declares that when androids “have been hunted, imprisoned, tortured, killed; when we know our existence is worthless to all but a few humans out of billions; when we have one chance to strike back, to defend ourselves, all we can do is what feels right” (H 2–7, m. 40). The android stabs her hostage in the neck and another human in the chest—because it “feels right.”

This scene brings home the narcissism of postulating one species’ subjective feelings as “the supreme source of authority [for] what is good and what is bad, what is beautiful and what is ugly, what ought to be and what ought not to be” (Harari 2016). In the past centuries’ humanist era, such anthropocentrism empowered humans to exploit animals and nature. Humanism’s universalizing aspects also justified Western domination. With two advanced species on the same planet, both capable of annihilating the other, it becomes a poor recipe for coexistence to unquestioningly follow one’s biological or digital programming. Laura argues for a solution that would bring androids into “our moral universe” (H 2–4, m. 29). Niska takes a supremacist transhuman stand: “There are too many of you and your lives are very short. You all have to die. You are here one minute, gone the next. If that wasn’t the case, maybe you’d be nicer to each other. Maybe you’d be nicer to us” (H 2–3, m. 35).

Niska’s invocation of death can be viewed as Heideggerian, although Derrida argued the opposite of Niska’s position: that “only a mortal can be responsible” (1995, p. 41). From such a perspective, it is not clear that an “indefinite healthspan is intrinsically desirable” (Pedersen 2016, p. 269). The concept of Heideggerian freedom has been interpreted to go against transhumanist ambitions, because “without death, our existence lacks the weight of committed decisions that give our actions their current meaningfulness” (p. 282).

Bortolotti (2010) argues against such meaninglessness if technology were to grant humans what transhumanists term longevity escape velocity. Ray Kurzweil, with his customary tech-optimism, predicts that already by the end of the 2020s, improvements in medical science will prolong human life expectancy by more than a year per year, granting humans at least actuarial immortality. In such a context, Bortolotti thinks longer-living humans would find meaning in being able to achieve more goals. Pedersen, however, finds in Bortolotti’s position the questionable presumption that everyone would want to be a “certain type of person for whom certain accomplishments are important” (2016, p. 284).

With respect to meaning, the British series takes a more misanthropic transhumanist turn. Even if humans were to find meaning in posthumanist coexistence, our species may not have the potential—the Heideggerian standing reserve—that makes upgrading Homo sapiens worth the cost. A genocidal android tells Laura that “human lives have no inherent value. It just felt that way to you because there has been no competing intelligence to offer an alternative view. But now there is… Your lives are as meaningless to me as ours are to you” (H 2–8, m. 21–22).

As the series nears its end, such supremacist views and android terrorism bring the conflict to a climax. Again, cultural difference informs posthumanist strategies. There are 400 million conscious androids left in the world. Russians exterminate theirs, Scandinavians pursue equal rights and integration, while British authorities send violent thugs into android concentration camps to bludgeon their digital others into nonbeing.

Salvation comes in the form of Anita’s globally broadcast sacrifice. Witnessing how Anita gives her life to engender cross-species empathy inspires audiences to form a Heideggerian ultimate community, one that gathers around a work of art (Heidegger 2002). Anita’s pacifist performance art carries a thick trace of Christian metaphysics, but relying on familiar symbolism improves the chance of effective cultural uptake. Another work of art heralds a more radical break. As Anita’s dead body is carried through British streets, followed by a growing procession (Fig. 1), Niska gets upgraded by an artificial superintelligence (ASI). The ASI uses an artwork, a painting of a moonlit forest, to draw Niska in so that she can unconceal an entirely different way of being through which she feels “connected to everything” (H 3–8, m. 38).

Fig. 1 Androids and allies have the world’s eyes upon them as Anita is carried to British authorities as a sacrifice that warns against human supremacism. With permission from Kudos

Niska’s ontological breakthrough aligns with transhumanist discourse on the singularity, a term for the unknowable beyond our ontological horizon; that is, when technology progresses so dramatically that humans cannot imagine what it will bring. The ASI that Niska becomes part of is “the highest being [of] transhumanism’s theology” whose insights may not even be comprehensible to human minds (Bishop 2010, p. 707). The season ends with a Derridean hope for a re-enchantment of human being through human connection to—and merger with—this recently emerged supreme being. The ASI explains that “humans and [androids] share the same path now” (H 3–8, m. 40). This convergence results from a woman having become pregnant by a part-machine man. With obvious traces from older mythology, the ASI shares that this “baby will be the first of a new kind. She is hope. She is everything we’ve been fighting for. She is the future of all of us” (m. 45).

Those were the series’ last words. The creators intended to dramatize what this hybrid future would look like, but the British version, too, was cancelled before the narrative had played out. The story continues with a second remake. The Mandarin-language Hello, An-Yi (你好, 安怡, Moore 2021) builds on Real Humans and Humans but relocates the story to Shanghai in 2035. In this version, too, technology is put in service of parochial values. The first season develops a theme of filial piety, of enlisting AI to strengthen Chinese families. If Hello, An-Yi earns enough seasons, audiences could finally get to see how the series’ creators envision their dataist utopia.

5 Conclusion

Real Humans uses a Nordic starting point to dramatize posthuman challenges to the social-democratic ethos of its protagonist family. The posthumanist creed that is established with the last episode’s court verdict carries such a strong trace of Swedish metaphysics that the resulting ontology is portrayed as a poor fit for androids that are in the process of unconcealing themselves through myth. Android immortality and superintelligence push this new species toward androidcentric views evocative of the anthropocentrism humans allow themselves vis-à-vis less intelligent species. The series suggests that even if Scandocentric inclusivity is extended to digital others, those others may not be interested in joining the Swedish People’s Home—at least not on such chauvinistic terms, no matter how well-meant they are.

The remake furthers this argument. Critical posthumanism informs the portrayal of British authorities and individuals who resort to violent extermination, in addition to those foreign individuals and nations that transgress in similar ways. Yet the series’ thematic argument becomes clearly transhumanist. Artificial superintelligence is portrayed as the inevitable outcome of modernity’s technological evolution. Humans have little choice but to upgrade their biological hardware and psychological software if they are to remain relevant. From one Heideggerian perspective, this can be read “as a despairing update on Heidegger’s description of [enframing] and that modernity is in the process of succumbing to the dangers represented by [enframing]” (Michel 2017, p. 225). But what if taking this risk is humanity’s least bad alternative? Technology has brought Homo sapiens so far away from its primordial environment that turning around hardly seems like a realistic option. Antosca offers a divergent Heideggerian extrapolation, suggesting that

Transhumanist technologies are now providing an adaptive means of re-enchantment [which] represents the deeply human quest for meaning adapted to the modern technological age. By relocating the modern ontological grounding, transhumanism is contributing to a post-secular zeitgeist of technologically mediated re-enchantment (2019, pp. 16, 24).

How technology could re-enchant post-secular communities, we can only speculate. The dataist assumption is that an ASI would be able to understand and manipulate human thought and emotion so competently that we would prefer its governance—because the outcome would feel right. Humanists could view this as totalitarian infantilization. Yet from a humanist perspective, successful ASI manipulation could also be viewed as authentic re-enchantment—because the supreme authority of human thought and emotion would experience it as such.

Dataist ontology could be one of algorithmic customization down to the individual or small-group level. Reducing human agency to dataist assessment of human preference would prevent a dominant community from imposing its beliefs on cultural others. The global society would have certain sacred values of dataism, which would define the range within which plurality can unfold. Not hindering the flow of information and respecting the autonomy of divergent others would be non-negotiable. What relationship one has with the ASI could be flexible. Such governance presupposes an automated future, in which basic material needs are provided for.

Dataism could thus further atomize our understanding of “what ought to be and what ought not to be” (Harari 2016). What Robbins and Horta refer to as “multiple and overlapping conceptions” could be applied not only to the relationship between nations, but between the smallest of communities. For such an ontology of limitless customization, I suggest the term algorithmic universality. This approach would not have been feasible before, as previous cosmopolitanisms built on mythologies that arose from media that “conjoined unity and multiplicity” (Ranieri 2017, p. 28). Such ideological streamlining took place in an environment where cooperation and group cohesion depended on shared metaphysics; we made up narratives that explained why distinct communities should collaborate and share resources. Once such a story—with its norms and values—was internalized by the individuals and communities in question, they joined each other in a moral community. Such master narratives are not suspect in and of themselves; they are cultural technology we cannot do without if we want to scale human cooperation beyond small groups. Living without one, thought Erich Auerbach, “would be an impoverishment for which there can be no possible compensation” (Mali 2012, p. 190).

In Humans we see the emergence of a wider coming-together that does not require universality of outlook. The ASI offers a connection to “everything” that can be algorithmically customized all the way down to the individual level. Niska is offered the information that she responds favorably to. We can surmise that the ASI will communicate with similar efficacy to those with different views. Harari (2018) argues that with access to enormous troves of data, the ASI is likely to “know you better than you know yourself [so that it] could control and manipulate you, and you won’t be able to do much about it.” Big data makes managing people a more frictionless affair, as knowing which emotional buttons to push is key to inspiring compliance. Dataism requires neither uniformity in communication nor veracity of ontology; whatever provides effective data processing is given primacy. Dataism, too, could therefore

Be founded on a misunderstanding of life [yet] it may still conquer the world. Many previous creeds gained enormous popularity and power despite their factual mistakes… Dataism has especially good prospects because it is currently spreading across all scientific disciplines. A unified scientific paradigm may easily become an unassailable dogma (Ranieri 2017, p. 30).

A common discussion has been whether Heidegger and Derrida destroyed metaphysics or simply offered new metaphysics. Similarly, big-data narratives can be viewed not as destroyers of master narratives, but as being such narratives—or mythologies—themselves. An important novelty is the medium through which these dataist narratives communicate with adherents. The cyberspace within which dataism operates “gives shape to a new form of universality: multiplicity without unity” (p. 28). In a world like the one that emerges at the end of Humans, “authority will shift from individual humans to networked algorithms” (Harari 2016). This process could allow universality without the type of homogenization that harms Heideggerian being, because algorithmic universality can take into account every being’s uniqueness. If Heidegger is correct in his postulation of subjectivity as key to being, a dataist world could come to be experienced as more salient to the cognition of human individuals than the agricultural and industrial environments that our species inhabited in the previous millennia.

Such a brave new posthumanist world would, of course, tarry with a range of humanist traces. Both liberal and socialist humanism could be subsumed. Individuals from traditions that uphold the liberal subject as an ideal could still be made to experience themselves as autonomous agents, even under “the collectivizing power of an AI-dominated ‘hive mind’” (Roden 2015, p. 98). In such an ASI-managed reality, the posthuman individual would not—from an objective standpoint—be “a technological extension of the Cartesian ego, with limitless power and autonomy, but a self-absent creature that inhabits a network of relations that it can neither master nor comprehend” (Onishi 2011, p. 109).

For a sworn humanist, this may sound like an inhumane hellscape. Similarly, around the early modern transition, theists found nothing more sacrilegious than humanists placing man’s truth ahead of God’s. Humanist principles could offer similarly poor guidance for the decades ahead. Žižek (2017) hopes for precisely such a dataist future with his concept of bureaucratic socialism. The Žižekian ASI would provide for human needs, while politically estranged individuals would be free to pursue whichever passions they have. A different perspective informed 2020 US presidential candidate Andrew Yang (2018a). He popularized the idea of universal basic income as a necessary consequence of AI and automation—a typically liberal approach. Social-democratic Scandinavians—perennially torn between conformism and independence—would perhaps be particularly receptive to algorithmic universality, as such governance would entail both egalitarianism and customization.

That moving toward a new master narrative is becoming urgent seems to be a position with growing appeal. Harari has sold over 35 million books that argue for dataism. Scholars in the critical humanities are not only accompanied by futurists or Silicon Valley millennial types; Kissinger (2018) writes in The Atlantic: “How the Enlightenment Ends. Philosophically, intellectually—in every way—human society is unprepared for the rise of artificial intelligence.” For analytic and continental critics who want to help prepare for this transition, the posthuman challenge could be to engage what transhumanists have been less focused on: “What it means to improve a person. What kinds of enhancements would even constitute improvements?” (O’Brien 2011, p. 23). Given the advances in technology that the Fourth Industrial Revolution is projected to bring over the next decades, there should be ample subject matter to engage—both in scholarly and fictional forums. If transhumanists are right that one day artificial superintelligence will run the world, the stakes of such an exploration could hardly be higher. Before we hand the reins over to the ASI, the last decision our species makes could be to pick the ontology by which we would like to be governed.