Introduction

The outside world is too small, too straightforward, too truthful, to hold all that is contained in one man.

Franz Kafka, November 13, 1912, in a note for Felice Bauer’s birthday.

The launch of ChatGPT, an AI chatbot capable of generating human-like text of previously unfathomable complexity and variety, sparked debates concerning whether we are at the dawn of a new age of writing and meaning-making (Gretzky 2024; Mishra and Heath 2024). Rather than weighing in on the potential impact of ChatGPT and other generative artificial intelligence tools (hereafter, GenAI), this paper engages with the implicit and deep-seated sociotechnical imaginaries underpinning reactions to GenAI. Put briefly, sociotechnical imaginaries are collectively held and publicly promoted visions of the future made possible through scientific and technological progress (Jasanoff and Kim 2015). As with any technology, GenAI’s impact depends not only on the technology itself but also on its interplay with the sociotechnical imaginaries underpinning its development, spread, and appropriation (Natale and Ballatore 2020; Richter et al. 2023). Crucially, sociotechnical imaginaries are not monolithic or static, but rather evolve through contestation among differing perspectives (Bareis and Katzenbach 2022; Rahm and Rahm‐Skågeby 2023). Fictional texts are a particularly evocative source of sociotechnical imaginaries, as they often shape our shared imagination of possible and probable futures (Cave and Dihal 2019; Hudson et al. 2023). Accordingly, to explore the roots of evolving sociotechnical imaginaries of meaning-making in light of the emergence of GenAI, I analyze two seminal works of fiction: Mary Shelley’s Frankenstein (1818) and Franz Kafka’s The Trial (1925).

Mary Shelley’s Frankenstein (1818) is regarded as an enduring myth that has shaped sociotechnical imaginaries around artificial life (Falk 2021; Musa Giuliano 2020). This paper sets out to explore the limitations of such a sociotechnical imaginary with respect to the features of GenAI and the social structures in which it is implicated. To do so, I appeal to another influential fictional text—Franz Kafka’s The Trial (1925)—which has often been positioned as reflecting the metaphysical and social challenges of the modern world (Arendt 1944; Deleuze and Guattari 1986; Solove 2001). Though less often explored in the context of technology in general and AI or education specifically (for an important exception, see Prinsloo 2017), I suggest that The Trial offers some key insights for thinking about GenAI’s role in meaning-making.

The paper begins by briefly discussing what is novel about Generative AI, followed by an introduction of the concept of sociotechnical imaginaries. I then unpack the sociotechnical imaginary of AI implicit in Frankenstein along three axes—(i) agency, (ii) relations, and (iii) control—briefly illustrating how these aspects are manifested in common reactions to GenAI. The main part of the paper delineates how The Trial could broaden current conceptualizations of GenAI’s role in meaning-making, suggesting that it aligns with a more-than-digital view, which challenges three key dichotomies underlying the Frankenstein myth: external-internal, process-outcome, and choice-coercion. Put briefly, much like Kafka’s letter to Felice, The Trial shifts our orientation from the Frankensteinian depiction of AI as an external actor with which humans struggle, to a focus on an endless process of humans and GenAI striving to ‘decipher’ each other, further deepening the entanglement of machines in evolving human practices of seeking and making meaning.

Generative Artificial Intelligence and Potential Shifts in Meaning-Making

At its core, GenAI employs machine learning techniques to discern and internalize the intricate patterns in vast datasets. It then leverages these patterns to produce human-level content across mediums like text, visual art, music, programming, and beyond. The novel aspect of GenAI is that it autonomously generates original outputs by capturing the underlying statistical patterns rather than merely copying the specifics of its training material (Ouyang et al. 2022; Miao and Holmes 2023). The results are reminiscent of human-level outputs, though with the scope and speed that only artificial systems can deliver. Here, it is vital to distinguish two stages in GenAI model development: pre-training and fine-tuning. During pre-training, GenAI learns from vast datasets to identify patterns and generate similar content. This stage is marked by its unstructured approach and unpredictability. In fine-tuning, the model’s capabilities are tailored by applying targeted reinforcement learning (usually via human feedback) to refine and direct its behaviors towards more specific contexts and outcomes (Ouyang et al. 2022; Sharma et al. 2023).
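The two-stage distinction can be made concrete with a deliberately toy sketch (my own illustration, not an actual LLM training pipeline): here ‘pre-training’ merely counts bigram statistics in a tiny corpus, while ‘fine-tuning’ crudely reweights continuations according to feedback scores, standing in for reinforcement learning from human feedback.

```python
import random
from collections import defaultdict

def pretrain(corpus):
    """'Pre-training': count next-word frequencies across a corpus,
    a stand-in for learning statistical patterns from vast data."""
    counts = defaultdict(lambda: defaultdict(float))
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1.0
    return counts

def finetune(counts, feedback):
    """'Fine-tuning': scale continuation weights by feedback scores,
    a crude analogue of reinforcement learning from human feedback."""
    for (prev, nxt), score in feedback.items():
        if nxt in counts[prev]:
            counts[prev][nxt] *= score
    return counts

def generate(counts, start, length=4, seed=0):
    """Sample a continuation word by word, proportionally to the
    (possibly reweighted) counts."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = counts.get(out[-1])
        if not options:
            break
        words, weights = zip(*options.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

# Hypothetical miniature corpus and feedback scores, for illustration only.
corpus = ["the court accepts you", "the court lets you go", "the law is open"]
model = finetune(pretrain(corpus), {("court", "accepts"): 5.0, ("court", "lets"): 0.1})
print(generate(model, "the"))
```

Real systems differ in every particular (neural networks rather than counts, learned preference models rather than fixed scores), but the division of labor is analogous: pre-training absorbs the statistical shape of the data, while fine-tuning steers generation towards preferred behaviors.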

GenAI therefore signals a paradigm shift, one where algorithms move beyond analysis and optimization to forging their own representations in a manner we once thought was exclusively within the realm of human expression (Gretzky 2024; Miao and Holmes 2023). Critically, one of the key innovations of ChatGPT was not its technological capacity but its simple-to-use interface and human-like interactions (Pons 2023), features that have become taken for granted in other general LLMs such as Claude and Gemini. These capacities for production and communication sparked debate about whether GenAI displays human-like intelligence or brings us substantially closer to the holy grail of AI research—artificial general intelligence—broad, flexible machine intelligence that could match or even outperform humans across a wide range of intellectual tasks (Giannini 2023).

Sociotechnical Imaginaries and Fictional Narratives

Understanding the evolution and impact of GenAI cannot be limited to attending to its technological capabilities. Instead, it requires scrutinizing the (often implicit) assumptions concerning its potential contributions and pitfalls, as well as desirable modes of use (Mishra and Heath 2024). This implies moving away from determinist arguments concerning technology’s inevitable impact and attending to how technological features are intertwined with discursive characteristics that shape how technologies are depicted, promoted, and enacted (Dishon 2024; Bareis and Katzenbach 2022; Fawns et al. 2023). Notably, even the term ‘artificial intelligence’ has vital implications, highlighting so-called human-like attributes of computational technologies (Natale and Ballatore 2020).

How technologies are framed relies on broader views of desirable social arrangements, what Jasanoff and Kim (2015) defined as sociotechnical imaginaries: ‘collectively held, institutionally stabilized and publicly performed visions of desirable futures, animated by shared understandings of forms of social life and social order attainable through, and supportive of, advances in science and technology’ (4). These imaginaries lay out desirable (or dystopic) visions of the future that play a vital role in shaping current technology development and use. In contrast with similar concepts such as master narratives, sociotechnical imaginaries are not homogeneous or static, but rather evolve over time and in light of contestation between different views (Bareis and Katzenbach 2022; Jasanoff and Kim 2015; Rahm and Rahm‐Skågeby 2023). Further, sociotechnical imaginaries go beyond an analysis of language or discourse to include how visions of social life are materialized and performed through technology (Jasanoff and Kim 2015). Attention to diverse and contested imaginaries is pivotal in the case of novel technologies, such as GenAI, that are still in the stage of interpretive flexibility, in which various depictions struggle (explicitly or implicitly) to establish the dominant framing of a given technology (Ramiel and Dishon 2023; Natale and Ballatore 2020; Richter et al. 2023).

Stories, narratives, or myths are uniquely evocative means for circumscribing current engagement with technological developments, as they offer simple and easily communicated views of technology’s potential and pitfalls (Bareis and Katzenbach 2022; Cave and Dihal 2019). Therefore, narratives could be conceptualized as the basic building blocks of more complex sociotechnical imaginaries (Sartori and Bocca 2023). Exploring fictional narratives is particularly imperative with respect to GenAI for several reasons. First, GenAI is often depicted as a critical step towards general or strong AI, eliciting age-old narratives concerning humanity’s aspiration to create artificial life. Second, the actual developers of AI have been preoccupied with fictional narratives concerning the creation of life, which have shaped their technological ambitions and design choices (Musa Giuliano 2020; Natale and Ballatore 2020). Finally, GenAI is still in its early stages where sociotechnical imaginaries are likely to be shaped by narratives and myths due to the lack of concrete experiences (Hudson et al. 2023; Richter et al. 2023). Therefore, attending to fictional narratives facilitates a more in-depth understanding of current discourse, while also potentially allowing us to examine how technologies could be imagined otherwise (Falk 2021; Mishra and Heath 2024).

Frankenstein and Rampant Sociotechnical Imaginaries of Artificial Intelligence

Frankenstein has been repeatedly acknowledged as a critical myth underpinning modern perceptions of human–machine relations (Cave and Dihal 2019; Prinsloo 2017). The novel is associated with the sociotechnical imaginary of intelligent machines rising up against their human creators, a fear Asimov (1950) famously labelled the Frankenstein Complex. This paper is less interested in the notion of machine uprising and instead aims to unpack the features of meaning-making in the often taken-for-granted Frankensteinian sociotechnical imaginary along three axes: agency, relations, and control.

Agency

The most salient characteristic of artificial life in Frankenstein is that it is anthropomorphized—characterized by human-like agency. The creature brought to life by Victor Frankenstein is never named in the novel, yet despite its monstrous appearance, the creature’s thoughts, emotions, and desires are all painfully human (Botting 2001; Shuffelton 2018). The creature’s recollections since his ‘birth’ are all essentially human-like in terms of their sensory input and their emotional and cognitive texture. Thus, artificial life is portrayed as a discrete entity, largely mirroring human agency. This mirroring is quite literal in the novel, as the creature’s views of the world and his own subjectivity are mainly based on observing humans—secretly spying on the De Lacey family for several months—and on reading novels.

Though AI is depicted as possessing superior capabilities—both physically and mentally (the creature quickly learns to talk and read)—the overall logic governing patterns of meaning-making remains stable. This overall similarity serves as the background against which certain differences can be appreciated and highlighted. Hence, it is not so much the rationale of agency or meaning-making that shifts as the actor who holds the privileged author position—the fear that machines will replace humans is embedded within current structures of meaning-making.

Relations

This anthropomorphic depiction of AI sets the tone for imagining human-AI relations as developing along similar lines to human relationships—personal interactions with a discrete subject that is mostly human in its organizing logic. In fact, it is exactly its humanity that leads AI to pursue one of the most human modes of conduct—freeing itself from its inferior position and dominating its environment and any other species in it (Cave and Dihal 2019; Falk 2021). Though Frankenstein’s legacy centers on the inevitable clash between humans and AI, the novel paints a more complex picture. Before creating the creature, Victor Frankenstein expects a glorious outcome: ‘A new species would bless me as its creator and source; many happy and excellent natures would owe their being to me. No father could claim the gratitude of his child so completely as I should deserve theirs.’ (54). Yet, when he finally succeeds, he is immediately repelled by his creation: ‘I had desired it with an ardour that far exceeded moderation; but now that I had finished, the beauty of the dream vanished, and breathless horror and disgust filled my heart.’ (59) Stunned, Victor abandons the creature, hoping that the whole affair was just a nightmare.

The key point is that the creature’s hideousness has already sealed his fate, as well as his relations with his ‘father’ and humans more broadly. As I elaborate in the next section, the question of what shapes human-AI relations is at the heart of the novel; for now, I highlight the temporal feature of these relations—the moment of creation is treated as the most pivotal point, an irreversible feat to which all later actions merely react, striving to attenuate the damage that has been done.

Control

The creature in Frankenstein is portrayed as having an inherently repulsive physical form that automatically elicits feelings of horror and aversion from anyone who sees him. This raises questions about humans’ control over their interactions with AI. In fact, the novel could be read as revolving around the question of human responsibility for AI, and more broadly, humans’ need to care for their creations (Botting 2001; Latour 2011).

Victor deserts the creature immediately after its birth, an act that the creature views as the source of his misery, and the motivation for his revenge. When the creature asks Victor to create a mate for him in order to save him from his loneliness, threatening to kill Victor’s family if he does not comply, Victor remains unwilling to fulfill this desire, worried that it would infinitely increase the dangers posed by artificial life. In contrast, the creature vehemently argues that it is not his own nature, but his creator’s actions that led to tragedy:

Remember that I am thy creature; I ought to be thy Adam, but I am rather the fallen angel, whom thou drivest from joy for no misdeed. Everywhere I see bliss, from which I alone am irrevocably excluded. I was benevolent and good; misery made me a fiend. Make me happy, and I shall again be virtuous. (114)

While Victor maintains a determinist view of the creature’s nature, the creature views his fate as contingent on Victor’s choices. More broadly, the creature suggests that Victor’s actions and choices shape his development and their relations. From this perspective, Frankenstein can be read as centered on the education, or lack thereof, of artificial life by humans (Latour 2011; Shuffelton 2018). Still, even within such a view, humans cannot control AI’s nature once it is created. At best, they can shape it through human-like interaction and education.

Generative AI in the Shadow of the Frankenstein Complex

The main features of the Frankenstein sociotechnical imaginary were reflected in mainstream media responses to the launch of ChatGPT. Pundits portrayed GenAI as potentially heralding the birth of artificial life. This anthropomorphism was pervasive in sensationalized accounts of GenAI going beyond the boundaries of its intended programming (e.g., Roose 2023). The worry was that once AI is ‘brought to life,’ it might be too late to respond to its all but inevitable uprising. Thus, calls for halting or slowing down the development of AI systems echo a sociotechnical imaginary according to which AI is an external actor, whose relations with humanity are largely determined before its birth, and over which we will have little control due to AI’s superiority. In line with the Frankenstein complex, it is assumed that though AI will introduce a new form of intelligence, its actions will be guided by a human-like desire for domination. Thus, these sociotechnical imaginaries postulate a world similar to our own, yet one in which humanity loses its status at the top of the (no longer organic) food chain.

This anthropomorphism is also connected to the characteristics of GenAI. Despite its black-boxed nature, GenAI’s most captivating feature is that its outputs seem human (Giannini 2023; Pons 2023). Moreover, GenAI materializes very concrete aspects of the Frankensteinian sociotechnical imaginary. Much like the creature in the novel is built from an amalgamation of human parts, GenAI generates content by reorganizing human thought and language. Further, just as the creature learns about the world by observing humans, GenAI learns by the very literal act of scouring human data and trying to identify its underlying logic. Thus, the view of AI as mirroring human forms of agency is reflected in the technical nature of current GenAI systems. In fact, it could be argued that GenAI does not aspire to create the most advanced intelligence but is rather focused on outputs that have the semblance of human authorship (Natale and Depounti 2024). Consequently, it is easier to understand why GenAI is assumed to replicate human logic, while improving on it due to its superior ‘physical capabilities’—its capacity to instantaneously produce endless variations of human-level texts. Such replicatory mechanisms are also at the heart of worries concerning AI developing human desires and behaviors, from love to domination. Critically, these worries are based on the Frankensteinian logic—GenAI does not introduce new modes of thinking or meaning-making; it is a threat to humanity exactly because of its all-too-human nature.

Kafka and the Search for Meaning (Making)

Whereas Frankenstein has become the dominant sociotechnical imaginary underpinning engagement with AI, The Trial might appear like a less obvious source. Though not often connected to technology per se, The Trial has been portrayed as capturing the fraught nature of man’s effort to make sense of an obtuse world, both metaphysically and in the context of modern societies and bureaucracies (Arendt 1944; Benjamin 1969; Canetti 1974; Deleuze and Guattari 1986; Munro and Huber 2012). More rarely, researchers have explored the relevance of The Trial to navigating technological systems, mainly with respect to challenges of privacy and power (Solove 2001; Solove and Hartzog 2024). In educational research, Prinsloo (2017) has already drawn the connection between Frankenstein and The Trial, focusing more specifically on the issue of algorithmic decision-making. Prinsloo acknowledges Frankenstein as key to understanding human worries that our creations might turn against us. To this, he adds the important conceptualization that The Trial reflects the concrete challenges and experiences of navigating an ‘algocracy’—an increasingly algorithmically governed world. In what follows, I develop and expand Prinsloo’s (2017) lines of thought, examining more closely how the patterns of meaning-making characterizing interactions with the court in the novel illuminate an alternative sociotechnical imaginary for thinking about GenAI’s impact on practices of meaning-making.

Agency

The Trial takes place during a year of its protagonist’s life—starting with Joseph K.’s arrest on his 30th birthday and culminating in his execution a year later. When K. tries to inquire about the reasons for this arrest, the men who arrest him flatly acknowledge that they do not know:

As to whether you’re on a charge, I can’t give you any sort of clear answer to that, I don’t even know whether you are or not. You’re under arrest, you’re quite right about that, but I don’t know any more than that. (15)

Not only does this bizarre state of affairs—of K. not knowing what he is accused of—persist throughout the novel; the question of the actual crime becomes secondary to K.’s efforts to navigate the intricate bureaucracy of the court in which he is being accused (Solove 2001). The shift from the question of the crime to the mechanics of dealing with the court is central to the unique model of meaning characteristic of the novel (Benjamin 1969). The Trial follows K.’s futile attempts to interpret the court’s rationale and intentions (Prinsloo 2017). Yet, the court and its representatives are devoid of agency in the ways both K. and the reader expect.

Instead, the court is constantly echoing and subverting the agency of the accused (Canetti 1974). Though K. is focused on deciphering the court’s logic, it seems as if K. himself is driving the unfolding events. For instance, when K. is first summoned to court, he is not given a time or a specific room at which he should arrive. Trying to devise a way to search the different rooms, he makes up a cover story about a joiner named Lanz:

He still felt unable to ask for the investigating committee, and so he invented a joiner called Lanz… so that he could ask at every flat whether Lanz the joiner lived there and thus obtain a chance to look into the rooms. (43)

Surprisingly, on the fifth floor a woman asserts that Lanz is inside one of the rooms, only for K. to realize that this is the court, where he is scolded by the magistrate for his tardiness. Though K. views the court as governing his fate, it seems to concurrently reflect or subvert his inner world. This logic is explicitly put forward towards the end of the novel, when K. arrives at the cathedral for a work-related issue, only for the priest, who also serves as a representative of the court, to suggest that he had summoned him there. K. wonders what the court wants of him, to which the priest responds: ‘the court does not want anything from you. It accepts you when you come and it lets you go when you leave.’ (264).

In contrast to the human-like and anthropomorphic agency of Frankenstein, The Trial offers a distinctively different model. The court does not work according to a well-defined legal system or by reference to truth. Instead, it is guided by its relations with the subjectivities of the accused (Benjamin 1969; Munro and Huber 2012). Though the court is positioned as The source of meaning-making, it does not include an identifiable author or decipherable forms of agency. In fact, through the constant echoing and distortion of K.’s own agency, the court dissipates the very notion of a discrete and identifiable agency. The boundaries between the inner and outer world are porous; it is not clear to what extent the outer world reflects K.'s thoughts or plays on them. This ambiguity is one reason that the novel has invited both existential interpretations and ones focused on the distorted logic and inscrutability of modern bureaucratic systems.

Relations

This lack of identifiable agency also dictates the relations between humans and the court. The court in the novel is portrayed as an entity individuals cannot completely evade, nor can they interact with it directly in intelligible ways (Solove 2001). The different characters are sentenced (pun intended) to constantly strive to interpret and influence the court, yet without ever having a clear sense of the appropriate ways to do so, or of their relative success (Benjamin 1969). Block, a fellow defendant, explains the matter to K.:

I was being made to suffer in many different ways but there was still not the slightest sign that even the first hearing would take place soon. So I went to the lawyer and complained about it. He explained it all to me at length, but refused to do anything I asked for, no-one has any influence on the way the trial proceeds, he said, to try and insist on it in any of the documents submitted like I was asking was simply unheard of and would do harm to both him and me. (212)

The novel follows K.’s attempts to understand the complex structure of the court, which consists of various levels, with the higher courts presented as the most important yet practically unreachable. Further, interactions with the court are often conducted through a wide and eclectic cadre of representatives and intermediaries—from lawyers of differing levels, stature, and expertise to less formal though often more influential characters (Munro and Huber 2012).

This is crystallized in K.’s conversation with Titorelli, the court’s portrait painter, who is at once a prominent figure and a beggar. Titorelli offers the most comprehensive portrayal of the trial’s possible outcomes: ‘absolute acquittal, apparent acquittal and deferment’ (182). As is often the case, these labels do not reflect their actual meaning. Absolute acquittals, Titorelli suggests, exist only as myths. The second option—apparent acquittal—implies that the defendant is acquitted, yet remains in constant danger of being rearrested, only for the trial to start over:

One day no-one expects it some judge or other picks up the documents and looks more closely at them, he notices that this particular case is still active, and orders the defendant’s immediate arrest. I’ve been talking here as if there’s a long delay between apparent acquittal and re-arrest, that is quite possible and I do know of cases like that, but it’s just as likely that the defendant goes home after he’s been acquitted and finds somebody there waiting to re-arrest him. (189)

In contrast to these two types of acquittal, K. is informed that the best way to deal with his trial is through deferment: in such a case, the trial goes on endlessly yet stays in its initial stages.

In contrast with the interpersonal and agonistic relations in Frankenstein, which deterministically stem from the moment of the creature’s creation, The Trial presents a world in which interactions with the court are both essential and futile. Thus, paradoxically, the best way to arrive at a definite outcome is to make sure the process never ends. Interactions with the court are necessary and require constant maintenance, yet they cannot be controlled, predicted, or even expected to progress towards a resolution. The novel could be viewed as turning the meaning of a trial upside down—it is not meant to arrive at a verdict or outcome; rather, it is the process itself on which one must focus. In place of the primordial moment of creation in Frankenstein that sets up an inevitable course of relations, in The Trial these relations have no clear starting point (an accusation) or end (a verdict); they are continuously ongoing yet never developing.

Control

The court’s ambiguous agency and its relations with humans bring to the forefront questions of control. The Trial offers a depiction of control that relies on a different understanding of meaning-making—shifting from a stable and general model of meaning to an idiosyncratic and personalized one. The Trial’s model of meaning-making is put forward in the famous Before the Law parable told by the priest towards the end of the novel: a man seeks entry to the law. At the law’s gate, he is denied entrance by a guard, who tells him that he is only the first, and least frightening, of the guards protecting the law. The man waits at the gate and tries to persuade the guard to let him in, only to repeatedly fail. As he is about to die, he asks the guard one last question—how is it that he is the only person who has tried to enter? The guard then replies: ‘Nobody else could have got in this way, as this entrance was meant only for you. Now I’ll go and close it.’ (256)

The duality of the law being both personally tailored and inaccessible is emblematic of the dynamics of meaning-making in the novel. K. simultaneously shapes reality and hopelessly tries to make sense of his predicament. As outlined above, it appears as if external events are shaped by K.’s thoughts. Yet he is also repeatedly told that he should learn to accept rather than control reality, as concisely summarized by the priest: ‘you don’t need to accept everything as true, you only have to accept it as necessary.’ (263) Like the man in the parable, K. finds that reality is created just for him, yet it remains beyond his reach. In a reversal of everyday reality, where we are limited to determining our own actions, Kafka paints a world in which K. can bend reality according to his whims, but his own inner workings and immediate actions remain beyond his control. Critically, this structure has two layers—like K., the reader is constantly lured to formulate a stable and comprehensive understanding of the text, but in line with the parable, the semblance of meaning only signifies its inaccessibility (Benjamin 1969; Canetti 1974). This is further reflected in the novel’s style, characterized by a juxtaposition of an overarching sense of symbolism with an inordinate amount of minute detail (Deleuze and Guattari 1986). Even when the parable is introduced, it is immediately followed by incessant arguments between K. and the priest about details in the story and their precise interpretation.

This model of control becomes clearer when we compare it to the Frankensteinian sociotechnical imaginary. Whereas Frankenstein elicits the question of whether humans can control their creations (and the world, in light of the creature’s superiority), The Trial introduces a world in which the very logic of choice and coercion is altered—the two are no longer mutually exclusive or zero-sum, in the sense that the ability to choose does not inherently lead to less coercion. In fact, K.’s choices only bring him closer to the predestined outcome of his death. Thus, K. is required not to alter any specific choice, but rather to reorient how the relations between choice and coercion are perceived.

The Trial as a Lens for Making Meaning of Meaning-Making with Generative AI

What is implied by the Kafkaesque sociotechnical imaginary for thinking about GenAI? I contend that the shift between these two sociotechnical imaginaries is broadly analogous to adopting a postdigital lens. While the Frankensteinian sociotechnical imaginary positions AI as separate from humans but operating according to a parallel logic, The Trial offers a lens that invites us to reframe processes of meaning-making altogether. Though the postdigital is notoriously hard to define (Gourlay 2023), for the purposes of this paper it suffices to highlight its resistance to a dichotomic view of the physical and digital, calling instead for examining the diverse ways in which the two are entangled (Bhatt 2023; Fawns et al. 2023). This, in turn, entails the rejection of determinist views of technology’s impact and highlights the importance of interrogating the reciprocal interplay of technological and discursive elements (Dishon 2021; Macgilchrist 2021; Mishra and Heath 2024). Critically, a Kafkaesque sociotechnical imaginary goes beyond merely reiterating or vindicating the ideas espoused by a postdigital approach. Gourlay (2023) emphasizes the need to avoid idealizing and reifying entanglements and connections in postdigital research and to concurrently address breakdowns or ephemeral elements in such structures. Analogously, I argue that The Trial offers a sociotechnical imaginary that unearths the breakdowns in dichotomic and stable views of meaning-making, challenging binaries implicit in the three axes above: external-internal, process-outcome, and choice-coercion.

External-Internal: Overcoming the Binary View of Agency

Within this Kafkaesque sociotechnical imaginary, GenAI is not treated as an external agent. Instead, it serves to further blur the distinctions between external and internal facets of agency, making it harder to gauge the demarcation between human and machine intentionality. This blurring of human and AI intentionality is key to understanding GenAI’s role in meaning-making at various levels. First, GenAI’s ‘intelligence’ is based on the data it is initially fed—its outputs are primarily a reflection of the statistical regularities it identifies in human data in the pre-training stage. Yet, this is not a simple mirroring, as GenAI relies on black-boxed processes to generate new and partially unpredictable meanings. Second, as GenAI outputs themselves are used more widely, they will represent a larger part of the data on which newer models are trained, further blurring the distinction between human and machine sources. Finally, researchers have identified recursive processes that illustrate how humans and GenAI are reciprocally entangled. For instance, during fine-tuning, GenAI models are likely to offer answers that are better aligned with user preferences, even if they are less truthful (Sharma et al. 2023). At the same time, humans are likely to modify meaning-making processes in order to maximize GenAI’s capabilities (Mishra and Heath 2024). Thus, much like the court, GenAI does not represent an external agent with a well-defined intentionality. It blurs and complicates the interplay of human and machine agency, undermining the differences between the two, and the notion of internal vs external agency more broadly. Though such assemblages of human and machine intentionality are in no way new (Bhatt 2023; Macgilchrist 2021), GenAI renders them even more complex and more literal due to its capacity to rearrange human-produced texts and generate outputs that are novel in the sense that they cannot be attributed to any specific human. 
This increases the likelihood of attributing authorship to the technology itself, rather than to its interplay with human practices of use and interpretation (Gretzky 2024).

Process-Outcome: Human-AI Relations and the Tendency to Generate and Ascribe Meaning

Rather than portraying AI’s nature as determined by its initial design, the Kafkaesque sociotechnical imaginary highlights the constant negotiation in which GenAI cannot be fully controlled nor can it be avoided. As K. is repeatedly told, he must take an active stance towards his interactions with the court, while acknowledging the nature and limits of his influence. In the novel, the lack of a definite answer does not deter K., or the reader, from searching for meaning. This perpetual process of searching for and constructing meaning is a productive lens through which to conceptualize human relations with GenAI. In contrast to the agonistic relations of two distinct subjects in Frankenstein, The Trial depicts relations that are based on an endless process of humans striving to decipher and shape GenAI’s agency. While existing research has rightly highlighted how GenAI reproduces and exacerbates biases in its training data (Williamson et al. 2023), it is important to concurrently note that GenAI does not simply reproduce meanings; it also potentially modifies them. Combined with the human tendency to seek and ascribe meaning, this leads to an endless proliferation of meaning. I argue that this production of meaning is a constitutive aspect of GenAI-human relations. Although sometimes compared to a calculator for words, a key difference is that GenAI is not designed to inherently offer the right answer; on the contrary, its overarching logic is to generate content regardless of its veracity or accuracy (Costello 2023; Natale and Depounti 2024). The human search for meaning is thus amplified by GenAI tools’ design to constantly generate outputs. Critically, the quest for meaning does not lead us closer to the truth or to a definite source but rather serves to create more layers of meaning. 
This layered model of meaning is reflected in the Before the Law parable: though we only interact with the most external guard—i.e., the actual outputs of GenAI—we are led to believe, or perhaps want to believe, that there are other more basic layers which could offer more definite answers. The search for these unreachable answers, the parable suggests, is fundamental to understanding how we make meaning.

Choice-Coercion: Reconceptualizing Control

Although sensational accounts of the future dangers of super-intelligent AI have received extensive media coverage, there is a need to address the concrete and immediate ways in which GenAI reorients structures of control, choice, and coercion. The Trial is not about humans losing control over their creations, if they ever had control in the first place. Instead, it foreshadows GenAI’s capacity to generate content that is personalized to every actor (and thus shaped by humans) yet is not amenable to control through explicit choices. This model of meaning-making undermines the dichotomy between choice and coercion, no longer positioning the two as mutually exclusive. In place of the view of control as domination, either of humans by AI or of AI by humans, The Trial explores how the interplay between humans and GenAI creates new structures of choice and coercion. Specifically, GenAI offers humans an unfathomable number of choices—an endless and personalized variety of texts adapted to distinctive styles, aims, and contexts. Yet, this variety is based on choices whose details are fundamentally determined by GenAI, and which often include personalization according to what GenAI calculates as users’ individual intentions or preferences (Natale and Depounti 2024). In this respect, while GenAI offers more choices and personalization, it is not clear when and how this supports or impedes human choice. For instance, in contrast to previous writing technologies, which mainly edited human texts, with GenAI humans become the editors of machine-written texts, whose rationale remains black-boxed (Robinson 2023). This supposedly allows humans to write faster and about a more diverse array of issues but could simultaneously coerce certain modes of meaning-making. GenAI’s outputs, which were meant to imitate human writing, could become a model or template that demarcates the possibilities of human meaning-making (Gretzky 2024). 
Therefore, as in Before the Law, personalization does not inherently entail increased control over meaning-making, but rather its increased mediation according to what GenAI identifies, or perhaps determines, as our personal preferences.

Summary and Afterthoughts

The emergence of GenAI has been portrayed as a potentially pivotal moment for human writing and meaning-making more broadly. To appreciate, and critically engage with, the unique aspects of any technology, we need to go beyond examining its technological features, avoiding the temptation to treat such technologies as natural or inevitable. Instead, we ought to concurrently explore the sociotechnical imaginaries that shape its depiction, use, and regulation. Accordingly, this paper sought to scrutinize the main element of the taken-for-granted Frankensteinian sociotechnical imaginary underpinning responses to (supposedly) intelligent machines. Within this sociotechnical imaginary, AI is approached in anthropomorphic terms, characterized by forms of agency that are mostly human. As a result, relations with AI are conceptualized in interpersonal terms, which commonly lean towards a conflict over dominance. Hence, AI threatens humans’ control over meaning-making without reshaping its overarching logic.

This paper suggested an alternative inspiration for our sociotechnical imaginaries of GenAI—Kafka’s The Trial—which has been characterized as engaging with the metaphysical and social challenges of meaning-making in the modern world. Specifically, I argued that The Trial offers a lens that challenges three common dichotomies that could limit our thinking about GenAI’s impact on meaning-making: external-internal, process-outcome, and choice-coercion. First, The Trial overcomes the tendency to portray AI as possessing a human-like agency external to humans, instead exploring how GenAI and human modes of meaning-making are entangled and recursively shape each other. Second, The Trial goes against the distinction between processes of meaning-making and their outcomes, calling attention to how meaning-making stems from the complementary pairing of GenAI’s design to constantly generate texts and humans’ tendency to ascribe and generate meaning. Finally, The Trial shifts our view of control from a struggle of domination against AI to an emphasis on how AI’s personalization of meaning-making increases our choices yet concurrently coerces certain patterns of meaning-making, thus limiting our influence.

By way of conclusion, I want to highlight that the distinction between these two sociotechnical imaginaries is not as sharp as this paper may suggest. Notably, the creature in Frankenstein does not inherently seek to dominate humans; it is his sour relationship with his creator that he views as the cause of his conduct. Further, we do not know what happened to the creature at the end of the novel, but his creator’s death seems to reflect his own loss of meaning. Thus, in line with Kafka’s admonition that the ‘world is too small, too straightforward, too truthful, to hold all that is contained in one man,’ both novels invite us to reflect on the meanings we attribute to our internal monsters as key to how we make meaning of our technological creations. Yet, the failure of the search for meaning in both texts—laid out in its utter extremity in the Before the Law parable—reveals that, despite its importance, this search cannot lead us to some final and well-guarded meaning worth spending our entire lives sitting and waiting for.