Introduction: proximate failures

Following Graham Harman’s insight that, when it comes to the social ontology of artifacts, “an object is better known by its proximate failures than by its successes” (Harman, 2016, p. 116), we will approach the modern computer—including its capability as artificial intelligence (AI)—by what it structurally cannot do, in order to discern what it does as a social object. Proximate failures, however, are not simply defined by the long list of things something cannot do but by an internal conflict between the computer’s inherent transcendental form and the reality of its material basis. This essay proposes that these two aspects—that is, what a computer cannot do and what it is supposed to do—are closely intertwined if we approach the computer as a social object. This also means that we will be focusing on the symbiosis (Harman, 2016, p. 117) that turns the modern computer into a social object. This symbiosis constitutes primarily its value as an epistemic tool or, to use a current concept, an enabling technology (Bellomarini et al., 2019). An enabling technology, as Bellomarini et al. propose, is supposed to open new avenues of scientific inquiry. The computer is fundamentally a calculating machine, which means that we must focus on the computer in this aspect, as a specific form of applied mathematics, if we want to approach its specific proximate failure.

Accordingly, we connect the problem of a proximate failure to the idea of a specific way of thinking, a certain mathematization of thought, to understand the structural position of the modern computer. Jacques Lacan offers us an initial direction for this proximate failure. In the early days of computer science, back when it was known as cybernetics, he noted that the computer would have difficulties approaching the symbolic as such, despite being structured by nothing other than the symbolic:

It is not because it lacks the supposed virtue of human consciousness that we refuse to call the machine to which we would attribute such fabulous performances a “thinking machine,” but simply because it would think no more than the ordinary man does, without that making [it] any less prey to the summonses [appels] of the signifier. (Lacan, 1966/2006, p. 45)

This might seem strange to us, as the feat of creating a thinking machine already seems to have been achieved, and the news is currently awash with the uncanny behavior of chatbots. However, as Lacan also indicates in this quote, the problems that we are facing in analyzing the computer are interlocked with the signifier’s effects. The problem at hand is therefore one of the symbolic structures of computation. The symbolic structure of computation should not be simply identified with the formal logic inherent to the machine, but in a stronger sense with the ontological problem of logic as including the “impasse of formalization” (Badiou, 1988/2006, p. 5). This perspective on logic is central to the continental discourse on logic, but largely ignored in the tradition of analytic philosophy. While Lacan pushed this discourse the furthest, Heidegger and Freud both contributed to a much more complex understanding of the foundational problems of logic. Our analysis assumes its methodical stance within this approach to logic: at the intersection between the impasse of formalization as the real and the symbolic structures that are based on this impasse even as they alone allow access to it. That means we give Harman’s idea of the proximate failure a specific Lacanian twist, by relating it to the problem of the three registers: real, symbolic, imaginary. If we approach modern computation through this logical perspective, we can quickly identify that the computer as such does not have full access to the symbolic, if the symbolic is characterized by the relation to this real impasse.

The discourse on AI today faces a significant challenge: there is a dearth of theoretical research on the misrecognition of the symbolic through computation. Although Lacanian literature incorporates the logical discourse into other research areas, there is a noticeable absence of a discourse on the ontology of logic that pertains to the psychoanalytic discourse on AI. While thinkers like Alain Badiou (1988/2006), Jacques-Alain Miller (1966/1977), Ellie Ragland (2015), Joan Copjec (1994), and Alenka Zupancic (2017) have contributed significantly to this discussion, their focus lies mainly in the clinical context or on political conflict. In recent years Isabel Millar (2021), André Nusselder (2006), Jacob Johanssen (2018), and Matthew Flisfeder (2021) have contributed to the discourse on computational problems. However, their approaches take different vectors of questioning that center on the fantasy of AI and computation and apply Lacanian discussions of fantasy to algorithmic intelligence. These approaches, though, only scratch the surface of the issue, addressing what Clint Burnham (2022) called the phallic appearance of modern computers.

Our approach, on the other hand, starts with the logico-algorithmic foundations of computing and its material structure of calculation, analyzing it based on a continental understanding of logic. Although Johanssen marks the perverse nature of big data (2018, pp. 141–167) and Flisfeder approaches the problem directly, neither discusses the algorithmic big Other’s inherent proximate failure. Millar, too, only touches on the issue in her discussion of omega numbers (2021, pp. 23–37), and Nusselder reduces it to the interfaces as a fantasy (2006, p. 63). These works, thus, do not provide a foundation for the analysis we present. Furthermore, Rambatan and Johanssen (2022) have recently revealed a fundamental “misrecognition” that occurs in big data. Unfortunately, they do not explore this misrecognition regarding its symbolic core. Therefore, the primary objective of this paper is to address this misrecognition and its symbolic core to advance the discourse on AI.

To understand the impasse of formalization in computation today, we must recognize that it is not a universal occurrence but a specific one, shaped by historical factors. The term “historical” here refers not directly to the study of the past, but to the German philosophical tradition, exemplified by thinkers like Martin Heidegger and Hans Blumenberg (2022). According to this tradition, history is defined by a particular ontological difference between being (Sein) as the structure of intelligibility and beings (Seiendes) as that which is made intelligible, similar to the relationship between signifier and signified. Such a historical approach has to account for the impasse of intelligibility, and it is therefore possible to link this idea to Lacanian logic. To explore this approach, we will employ Heidegger’s concept of “modes of being,” which we examine in detail in the first part of this paper.

The proposed idea is that the modern computer is an anchoring object that structurally reinforces a certain mode of being with an inherent compulsion that is not cultural. Heidegger first introduced the concept of computability (Berechenbarkeit, sometimes also called Machenschaft) as a mode of being nearly 80 years ago, when computers were still in their early stages of development. He thus recognized, long before Cathy O’Neil’s (2016) recent book, that computers are “weapons of math destruction,” as they would force us to view the world in terms of quantifiable energy and information. He believed that this way of thinking would ultimately lead to the destruction of our planet and our ability to approach reality (Heidegger, 2000, p. 172). In this paper, we will explore the concept of computation as a mode of being, drawing on a combined Heideggerian and Lacanian approach to analyze the specific impasse by which formalization in modern computation is structured. Using Miller’s “logic of suture,” we will examine the inherent impasse of formalization in computability and argue that it is inaccessible to the computational machine, which changes the way we should approach this modern mode of being. Lastly, we will discuss how the computer’s misrecognition of the symbolic produces an impasse of formalization that is incalculable.

Modes of being

Let us begin with the question of what a mode of being is. Heidegger assumed that a mode of being essentially operates as a sort of implicit or tacit social knowledge—the “unknown known” in Žižekian terms (Žižek, 2008). This mode of being acts as a determinant of what is intelligible and thus of what is part of being and what is not. These modes of being constitute different transcendental distinctions of what is considered reality and what is unreal or unintelligible. A classic example that Heidegger himself offers is the mode of being of scholasticism. This mode of being is centered not on calculability but on “createdness,” meaning that the intelligibility of something is determined by its relation to its creation by a divine artisan. Fully realized, this idea means that it is not the divine creator who organizes this distinction but that the idea of the divine creator is derived from the central idea of this mode of being:

It is evident that God’s existence is not so much the source from which the being of the ens creatum is determined, but vice versa. The being of God Himself is determined on the basis of a definite preconception of created being. (Heidegger, 1994/2005, p. 142)

In the case of createdness and other modes of being, this is a form of sublimation of a human practice. This means that the structure of artisanal creation is elevated to a master signifier that organizes the symbolic order to make things intelligible. Modes of being in this sense are thus central distinctions that dominate and organize the divide between the intelligible and the unintelligible. Notably, the unintelligible cannot be expressed without stark difficulties, as we can see in the tradition of negative theology, which revolves around the unintelligibility of the excluded, that is, the uncreated divine. Central to this notion of a certain mode of being is that it is a structure imposed on intelligibility and that it is not an exterior principle to the field that it organizes but rather an elevated element of it, not unlike Kuhn’s paradigms.

To translate this into psychoanalytic terms, modes of being are logical distinctions structured like the paternal signifier, but with a globalized scope. Of course, this globalization was originally simply a cultural distinction, but this shifts in computerization as we discuss below. This organization has a curious temporal aspect; central to all modes of being is the retroactive effect they have. Heidegger noted early on that the Entwurf, that is, the existential projection that constitutes our genuine relation to being, retroactively acts upon our awareness of the past. The same can be said about the modes of being that, while being historical transcendental horizons, do not reflect upon this historicity but instead retroactively project their mode of being as something that has always already been there (Heidegger, 1967/1999, pp. 129–130). For example, scholasticism understood antiquity within the transcendental horizon that scholasticism offered and not from within antiquity’s own mode of being.

Heidegger assumes that the modern mode of being is structured by computability (Berechenbarkeit), which means a universal quantification of things into energy to create more energy. It is a representative structure that is focused on a gigantic “anticipating, planning, organizing grasp of everything, before everything is already grasped in particulars and individuals, this representation finds no limits in what is given and seeks to find none” (Heidegger, 1989/2012, p. 107). Just as the scholastic mode of being operated on the basis of a fundamental division (createdness and non-createdness), computability also operates on the basis of a logical operation. However, this operation is “its divisibility into parts which remain the same as it in kind” (Heidegger, 1989/2012, p. 108). This means that computability, as Heidegger conceives it, introduces a totalization of explicit and explicable data. There is a clear parallel to what Lacan called the capitalist discourse, especially as calculability “never knows overabundance (what is in-exhaustibly unexhausted)” (Heidegger, 1989/2012, p. 108), thus mirroring the “rejection of symbolic castration” (Vanheule, 2016, p. 7) that marks capitalist discourse. These parallels become apparent especially in the relation of this discourse to the unconscious and its repression, as the capitalist discourse, by rejecting castration, also excludes the real. However, computability today is heavily bound to artifacts. To understand what this means and to demarcate its difference from the capitalist discourse, we need to understand how computability was structured before the widespread rise of modern computers in the final decades of the previous century.

The foundational abyss

In Heidegger’s time and until the middle of the last century, thinking with mathematics still primarily meant using mathematics as a way of writing, be it on blackboards or paper. The practice of calculation was limited only by the symbolic structure of language and writing. Reflections on abductive logic that moved beyond the mainstream interpretations of the Frege–Russell tradition, as found in C. S. Peirce’s work (Hoffmann, 1999), or Spencer-Brown’s foundational voids (1979, p. 101) required the thinker to transgress a specific practice of writing and assume it to be fundamentally faulty or at least limited. A not-all was implicitly included in this practice. Such transgressions made it possible to reformulate the symbolic and create new formalizations. These transgressions do not need to move beyond the practice of writing that mathematics represents (Wittgenstein, 1976, p. 38). Instead, they enrich this practice by adding new elements and problems to it. This transgressive aspect as such is essential to scientific writing and mathematization. Even Gottlob Frege, one of the founding figures of modern logic, employed this transgression as a foundational element. The transgression in his work might have been repressed in the analytic philosophical tradition but was influential nonetheless. In formal terms, Lacanian psychoanalysis is well acquainted with the problems that this poses. This transgressive structure is the metonymy of the signifier that produces the unconscious as a negative virtual addition to the defined field that a signifier marks.

The formal structure of this transgression can be detailed in the logical structure of Frege’s Die Grundlagen der Arithmetik (1884), which Jacques-Alain Miller discussed in his work “Suture (Elements of the Logic of the Signifier)” (1966/1977). Let us briefly outline Miller and Frege’s argument to indicate the problem that we are faced with. It concerns the metaphoric element of Frege’s reasoning, which was not criticized by Russell and is external to the later critique of Frege’s Grundlagen. The reason for this transgressive element being exterior to the later critique is simple. The argument that we will focus on is structurally comparable to the deduction of transfinite numbers by Georg Cantor that Alain Badiou introduced to philosophy in his Being and Event (Badiou, 1988/2006). However, the core argumentation probably dates back at least to Immanuel Kant’s noumena and his infinite judgment. It is therefore neither new nor very radical in its thinking. Still, Frege’s argumentation offers a good example of this transgression, as it demonstrates, with elegant simplicity, the problem before us.

Frege operates with three central notions: the concept (Begriff), the object (Gegenstand) and the number (Anzahl). These are clearly defined by their relationship to each other. An object is that which is signified by a concept but is not a concept in itself. Instead, objects are determined by names as singular and clearly defined entities (Frege, 1892/2008, p. 49). A concept, in turn, is defined by being a predicative proposition about objects that fall under it but is in itself not an object (Frege, 1892/2008, p. 48). In contrast to these two notions, a number is more complex; it is a property (Eigenschaft) of concepts (Frege, 1884, p. 64). Frege arrives at this notion of numbers by starting with an arbitrary concept F and creating the concept “equal in extension to F.” The extension of these two concepts (that is, the objects that fall under them) is identical and is defined by the logical relation φ. This logical relation φ, which applies to both concepts, is the number. However, this number is not in itself a concept but an object (Frege, 1884, p. 67). Accordingly, “one” is the name of the specific number that is a property of all concepts that refer to a single object. Therefore, there are not two “ones” but only a single “one” that is a property of all concepts that designate a single object.

How does Frege extrapolate the series of numbers? If we think of the concept “not identical with itself,” no object falls under it. Why is this the case? Because Frege conceptualizes identity by drawing on Leibniz: “Eadem sunt, quorum unum potest substitui alteri salva veritate” (Those things are the same of which one can be substituted for the other without loss of truth). Therefore, the notion of number is applicable here in only one way, as an absence, because nothing falls under “not identical with itself.” Hence, “zero” is simply the name of a void or absence, not even a real object. Drawing on this strange non-object named zero, which falls under the concept “identical with 0,” Frege can now produce the number “one,” which is applicable to the concept “identical with 0.” This is a very brief description of how Frege constructs the series of numbers.
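In modern notation this construction can be compressed into a few lines. The following is a standard reconstruction rather than Frege’s own presentation; the abbreviation Nx Fx (“the number belonging to the concept F”) is introduced here for convenience and is not Frege’s notation:

```latex
% A standard modern reconstruction of Frege's construction (not his own notation).
% "Nx\,Fx" abbreviates: the number belonging to the concept F.

% Hume's Principle: two concepts have the same number iff they are equinumerous.
Nx\,Fx = Nx\,Gx \;\longleftrightarrow\; F \approx G

% Zero is the number of a concept under which, by the principle of identity,
% nothing can fall:
0 \;:=\; Nx\,(x \neq x)

% One is the number of the concept "identical with 0", under which exactly one
% object (zero itself) falls:
1 \;:=\; Nx\,(x = 0)

% Each further number counts the numbers already named, so the series is
% generated by an oscillation between the void and its names:
2 \;:=\; Nx\,(x = 0 \,\lor\, x = 1), \qquad
n+1 \;:=\; Nx\,(x = 0 \,\lor\, \dots \,\lor\, x = n)
```

What matters for the argument that follows is that the entire series hangs on the first definition, in which a number is assigned to a concept whose extension is necessarily empty.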

Miller (1966/1977, p. 29) argues that the contradiction of being “not identical with itself” is the key to understanding how the number zero is the necessary starting point of the series of numbers. This is achieved through a topology of truth, where the principle of identity is used in an inverted Kantian infinite judgment. This topology indicates only a space of rationality and immediately moves beyond it by marking the space beyond as non-identical. However, this non-identicality does not reveal much about this space (not even whether we should assume it to be infinite), except that it is not within the topological space of Leibniz’s principle. This problem is similar to that discussed by Heidegger in “What is Metaphysics?” (1929/2007), where any determined universality always produces a necessary indeterminate excess. This indeterminate space should not be seen as inconsistent or determined by the rules of consistency that the principle calls for, as could be suggested by assuming with Graham Priest (2006) that the space generated here is paraconsistent. Instead, as Quentin Meillassoux argued, the foundational and absolute “chaos” that emerges as the basis of determination cannot be addressed as inconsistent (2006/2008, pp. 78–80). The negation that constitutes the non-identity of this exterior therefore should not be read as a privation, that is, dependent on the determinate space of identity, but as an infinite judgment. It is not a simple not-identical space, but the not-all of the seemingly universal principle.

The principle that Frege takes from Leibniz acts then as a limit to reasoning and is only applicable from zero onwards. However, he must move beyond this identity of reason. Only by giving a name to this indeterminate space outside of the rational frame that Leibniz assumes can Frege arrive at the series of numbers, especially since every other number will be constituted by an oscillatory interaction between this non-identical void and the names created from it by repeatedly counting it (Miller, 1966/1977, pp. 30–31). In short, Miller’s argument is that the contradiction of being “not identical with itself” is necessary to understand how the number zero is the starting point of the series of numbers. This requires moving beyond the principle of identity and recognizing an indeterminate space outside of rationality.

This transgressive element in mathematics allows us to limit the idea of a mode of being that Heidegger assumed to be universal, thus contradicting Heidegger’s own assumptions. If calculations positively rely on something that is not that “which remain[s] the same as it in kind” (Heidegger, 1989/2012, p. 108), then as objects of mathematics they inherently relate to the impasse of formalization. However, the mode of being of computability that Heidegger proposed, which emerged alongside modern science in the eighteenth and nineteenth centuries, still oriented its idea of computability toward a certain praxis of calculation that allowed and required, as shown above, a certain deviation from this praxis. This deviation was still included within the conceptual field of intelligibility that is structured by the cultural mode of being of calculability. Notably, a whole range of mathematical and philosophical theories acknowledge the extension of this structural distinction, which originated in the Kantian distinction between noumena and phenomena (Kant, 1790/2007, B310–A255). It is here that Kant also uses the noumena as a negative and indeterminate but absolutely necessary demarcation of sensory perceptions, a demarcation that cannot itself be called sensory.

Structure instead of culture

Frege’s use of one and zero radically differs from the use of one and zero in modern computing, since the register-based method of calculation does not allow us to apply such an infinite judgment as a basis for counting. Modern computers do not permit us to move beyond determined registers of numbers, which in turn means that they are incapable of transgressing beyond the specific practice of writing for which they are built. Theoretically, they operate on the assumption of starting with a whole, which in the case of the computer is the “machine” that cybernetics introduced. In systems theory, information theory, and cybernetics and its successors, this focus on “wholes” as systems is central (Bertalanffy, 1969, p. 5; Ashby, 1956, p. 3). We also find this focus on systems as an assumed “one” in a more recent AI theory shift (Agre, 1997; Freeman, 2000, pp. 28–30). By assuming a “whole” or “system” as the starting point of theory, the void, as a fundamental condition, is excluded.
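To make this exclusion concrete, consider a minimal illustrative sketch (ours, not drawn from the cited literature) of how a fixed-width register treats zero. Here zero is one determinate state among a totality of states that is given in advance; it is not the name of a void:

```python
# Illustrative sketch (not from the source text): in a fixed-width register,
# "zero" is one determinate bit pattern among a pre-given totality of states.
WIDTH = 8                      # an 8-bit register, i.e. 2**8 = 256 possible states
STATES = 2 ** WIDTH            # the "whole" is given in advance

def register_add(a: int, b: int) -> int:
    """Addition as the machine performs it: always inside the closed set of states."""
    return (a + b) % STATES    # overflow wraps around; nothing exceeds the whole

zero = 0b00000000              # a marked, determinate state, not the name of a void
one = register_add(zero, 1)    # "counting" moves between already-given states

print(format(zero, "08b"), format(one, "08b"))   # 00000000 00000001
print(register_add(255, 1))                      # 0: the count folds back into the whole
```

Whatever such a machine counts, it counts within this pre-given whole; there is no place in it for the “less than zero” from which Frege’s and Miller’s derivation sets out.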

The name of the void, “zero,” is therefore not the same concept as the zero level of change or the neutral position that the zero of measuring indicates in physical computational systems. Both of these concepts of zero mark either the absence of a change or the absence of a given variable, and neither is applicable to Frege’s argumentation; indeed, both presuppose it. Neither the non-change of a system nor the neutral position allows us to construct a space/void that is non-identical. The only way zero would be applicable to this non-identity is in the form of the empty set, but only in the specific interpretation that Badiou offers for zero, which means to utilize it as an indeterminate void (Badiou, 1988/2006, pp. 66–69), but not as an enclosing or mystical unity. The problem also haunts cybernetics, for example in the identification of the necessary void with an encompassing whole in second-order cybernetics; compare the collection by Kauffman and Brier (2001), which inexplicably turns to mysticism instead of approaching the real. Other uses of zero/absence/the empty set do not mark the absence of identity but rely on a zero level that has already been marked, that is, attained. Of course, one might now argue in line with Ricardo L. Nirenberg and David Nirenberg’s critique of Badiou:

The important thing is that we know how to go from any stage to the next and that we can do so forever, and in that sense the usual sequence 0, 1, 2, 3, . . . will do nicely enough. (Nirenberg & Nirenberg, 2011, p. 594)

However, the structure of thought that Frege, Cantor and Miller, as well as Badiou, employ relies on an interpretation of the empty set that is based on the void. We need to name the void to reach a consistent concept of counting that is not identical with the foundational void employed to reach this point. So, the sequence will do nicely on the very condition that we can reach the zero level starting from less than the zero level (Žižek, 2012, p. 585). Central to this argumentation is that it is not a problem of complexity. Computers are unable to articulate this problem not because they lack complexity but because they start at a level of complexity that is already too high.
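The set-theoretic form of this condition is the standard construction that Badiou’s reading of the empty set draws on, in which every number is generated from nothing but the void. A compact rendering (standard set-theoretic material, not a quotation from Badiou):

```latex
% The standard von Neumann construction: every natural number is built by
% repeatedly "counting" the void (the empty set) and its previous counts.
0 := \varnothing, \qquad
1 := \{\varnothing\}, \qquad
2 := \{\varnothing, \{\varnothing\}\}, \qquad
n+1 := n \cup \{n\}
```

The sequence “will do nicely” only because each step silently repeats the naming of the void that the first step performs.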

Now, if we listen to the popular philosophers or billionaire gurus of Silicon Valley, we might assume AI and computing to be well adapted to nearly any problem. But if we look at actual studies, and consider those actually working on AI rather than selling it, another picture emerges. Despite extensive scientific research, current AI models have not been able to identify any strong structural patterns within social realities, which is a significant issue considering the growing importance of AI in various fields. Numerous studies, including those conducted by Dressel and Farid (2018), Littlefield et al. (2021), and Salganik et al. (2020), have highlighted the limited predictive power of current AI models with respect to social data. Salganik et al.’s study is particularly noteworthy in this regard because of its considerable scope. The fact that AI has failed to analyze social data effectively should be a major concern, as it might highlight an inherent limitation of the digitalization of social data. The common response to this is to assume a lack of data or an inadequate model; however, if we assume that computability misses a major element of the logico-mathematical reasoning that continental philosophers and psychoanalysts hold to be relevant to the structure of social interactions, it is no surprise that AI fails in a field where its foundational logic is lacking.

Revisiting Heidegger’s concern about calculation as the modern mode of being, we understand that it was not initially conceived with materialization in mind. Materialization introduces a unique aspect, allowing this mode of being to function as Heidegger anticipated and to permeate global society. This refines Heidegger’s notion of modes of being. However, the materialized mode of being as computation is not a cultural identity or a Welt but holds a structural character. This material element is what Vivek Chibber describes in political terms as a “structural location” that determines and organizes a cultural “meaning orientation” (2022, p. 34), the important addition here being that there is a material object on which the world as meaning orientation can be anchored and that structures meaning. Chibber differentiates between meaning orientations and structure by comparing the effect they have. Meaning orientations might play a central role in our lives and the actions we take, but at best they merely coerce. They can create threatening illusions, but we can give them up. Structure, however, compels us to act in a certain way. We can depict this by visualizing the absence of coercion that a blackboard exerts upon our mathematical reasoning. We can quite easily move beyond the limitations of the blackboard and extract a metaphorical reasoning from our calculations, as Frege does. However, calculating by using a computer presents us with hard limitations that make whatever lies beyond its limits appear to be non-existent.

The materialization of computerization also changes the force of its structural compulsion. In stark contrast to the compulsion that computability as a cultural mode of being exerted, which was linked to the European scientific sphere of influence (including its former colonies), computerization now exerts a global and culturally neutral compulsion. Logic and computers are seemingly culturally neutral. In this sense, computerization shares a central characteristic with capitalism as Chibber describes it. In contrast to capitalism, which according to Chibber exerts its compulsory force at the level of material survival (2022, p. 108), computerization compels us at the level of epistemology. We cannot but act in a certain way. This compulsion is more potent if we assume that AI will bring a sort of end to theory (Anderson, 2008). Although Chris Anderson’s claim is exaggerated and misunderstands the concept of theory, it still highlights a crucial aspect of AI and computerization in philosophical terms. AI functions as an enabling technology that not only organizes and structures thoughts but also serves as a transcendental structure of knowledge, imposing digitizable data specifications onto information. Despite appearing culturally neutral, it is, as discussed, deeply theoretically infused, reflecting a particular way of understanding the world that excludes the Lacanian real. If we use it without being aware of this limitation, we are indeed approaching an end of theory. However, we argue that the basis of this effect is to be found not in a certain cultural trust but in the materialization of computability as an epistemic structure. If we assume that today’s machines are part of the big Other’s reflection that constitutes our subject position, this inability of the machine is not just an epistemic problem in a formal sense but a concrete problem of modern subjectivation.

Now, the modes of being as Heidegger conceptualized them were never formulated with a primary concern for manifestation. Freudian and Lacanian theory, on the other hand, offers the “subject of the unconscious” that “gears into the body” (Lacan, 1990, p. 37), and we propose that any computer and every instance of AI today has a computational influence on the unconscious. This influence is based not just on the historical exclusion of the void that acts as the organizing principle of its structural compulsion, but also on the techno-material enforcement of this exclusion. The computer, therefore, does not just coerce us to exclude the real; as long as we utilize it to approach the world, it forces us to exclude it. This means that with computation we see a shift from a sociolinguistic base to a techno-material dimension of modes of being. This computational effect on the unconscious, a reformulation of the algorithmic unconscious that Luca M. Possati (2020) proposed, allows us to broaden the unconscious structure of AI, that is, its inherent materialized assumptions about being or “unknown knowns” in Slavoj Žižek’s (2008) semiotic fourfold of knowledge, to computation as such and not just to the algorithmic structure or the specifics of modern AI. If we approach this unconscious under the heading of computerization as another artificial curtailment of computability, the distinction between the intelligible and the non-intelligible is much more radical. This exclusion is not repressed or foreclosed, but fundamentally excluded without being able to enter the field of intelligibility except by a violent intrusion.

Now we can delimit the difference between the capitalist discourse and computation. We have recently argued that modern AI acting in social media produces a self-relation that approaches the formal structure of the perverted position and even exaggerates it, since the computer cannot operate with anything close to the formal problem the objet petit a indicates (Heimann & Hübener, 2023). This also means that the subject constitution mirrors the product of capitalist discourse, but for different reasons. In the capitalist discourse we can identify a “brutal immediacy” because its core structure is marked by the denial of the master’s castration (Žižek, 2016, pp. 496–497). Because the objet petit a is relegated to a central but disavowed position, the impasse of formalization that it marks still operates centrally within capitalist discourse. However, if we assume that the disavowal is maintained not by discourse but by the material structure of calculation, we approach a different problem.

Outside context problems

Can we give a less abstract example of the violent intrusion that this denial of the unconscious produces if it is assumed to be universal? Heidegger considers the central element of modern computability to be a kind of planning that relies on “divisibility into parts which remain the same as it in kind” (Heidegger, 1989/2012, p. 108). The concept of planning and organization is today mostly encapsulated under the umbrella of mathematized strategy. Strategy, in this context, is understood as relational and contextual planning, which becomes evident when we examine game theory. As a subfield of applied mathematics, game theory explores decision-making processes in scenarios where multiple individuals or groups have conflicting interests and one person’s actions are contingent upon those of others. This analysis involves examining various strategies and potential outcomes within these “games,” with each player aiming to optimize their own benefits. Similar to the results of Dressel and Farid, Littlefield et al., and Salganik et al. in AI studies, game theory has also faced criticism for its limited practicality in social frameworks. In a 1994 meta-study, Donald Green and Ian Shapiro remarked: “What have we learned about politics? We directed our attention toward empirical rational choice literatures and eventually arrived at the conclusion that very little has been learned” (1994, p. x). This limited practicality can be attributed to the same ontological limitations. However, in contrast to the computer, game theory has been applied to social problems for a longer time.
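To see how thoroughly such planning presupposes a closed field, consider a minimal sketch of a game in normal form (the payoffs are hypothetical and purely illustrative): every player, every strategy, and every payoff must be enumerated in advance, and the analysis can only ever return elements of that table.

```python
# Illustrative sketch (hypothetical payoffs): a 2x2 game in normal form.
# The point is structural: the analysis requires that every player, strategy,
# and payoff be given in advance as a closed, fully enumerated table.
from itertools import product

strategies = ["cooperate", "defect"]

# payoffs[(row_strategy, col_strategy)] = (row_payoff, col_payoff)
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def is_nash_equilibrium(row: str, col: str) -> bool:
    """A profile is a (pure) Nash equilibrium if no player gains by deviating alone."""
    row_payoff, col_payoff = payoffs[(row, col)]
    best_row = all(payoffs[(r, col)][0] <= row_payoff for r in strategies)
    best_col = all(payoffs[(row, c)][1] <= col_payoff for c in strategies)
    return best_row and best_col

equilibria = [p for p in product(strategies, strategies) if is_nash_equilibrium(*p)]
print(equilibria)  # [('defect', 'defect')] - nothing outside the table can ever appear
```

An outside context problem, in the sense discussed below, is precisely what cannot appear as a row or a column of such a table.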

Game theory operates fully within the modern mode of being as calculability. Taking a look at its inception, one quickly notes that game theory assumes that the “rules of the game” can be modeled on “physical laws which give the factual background of the economic activities under consideration” (Neumann & Morgenstern, 1953, p. 32). However, understood as physical laws, these laws of game theory conflate the effect of the law with the realization of the law, because they assume that, like physical laws, rules simply detail a specific necessary action instead of carrying with them a virtual excess, as social rules do. This turns the symbolic into a purely positive structure, exactly as the machine does. This is a problem that Joan Copjec indicated in regard to Michel Foucault:

For Foucault successfully demonstrates that the conception of the symbolic on which he … relies makes the imaginary unnecessary. … He rethinks symbolic law as the purely positive production, rather than repression, of reality and its desires. (Copjec, 1994, pp. 23–24)

For Copjec (1994, p. 14), this conflation determines and deflates desire, thus abolishing the metonymic, transfinite dimension of the symbolic that Lacan and Freud introduced. Game theory, as an example of the pre-Lacanian idea of the mathematization of the social, therefore assumes the same ontological completeness that the computer does. Despite not explicitly assuming a metaphysical absolute, equating the rules of the game with physical laws in the Newtonian sense (compare Neumann & Morgenstern, 1953, pp. 4, 6, 14, 32) leaves no space for the indeterminate that enters through the symbolic. This means, in turn, that without any explicit reference to theology, these systems still assume an ontotheological form. This implies that the axiomatic foundations, which are necessary to assume that game theory is applicable to strategic problems, coerce us to think an imaginarized symbolic—that is, a symbolic register that has no transgressive element. But game theory is just that, a coercive theory. The epistemic topology utilized here is that of a closed circle that might differentiate between known unknowns and known knowns, as in Žižek’s (2008) theory of knowledge, but excludes everything that cannot be made to “remain the same as it in kind.” What this excludes is, to misquote Friedrich W. J. Schelling, “that which in the law itself, is not [the law] itself” (compare Schelling, 1950, p. 27).

To make it more explicit, what happens to computability as planning when faced with a problem that originates outside of the materially enforced topology of intelligibility that is assumed in the calculability of a situation? It creates the possibility of an unsolvable “outside context problem” as a proximate failure of calculability. This idea of an outside context problem has been formulated by Iain M. Banks:

The usual example given to illustrate an Outside Context Problem was imagining you were a tribe on a largish, fertile island; you’d tamed the land, invented the wheel or writing or whatever, the neighbours were cooperative or enslaved but at any rate peaceful and you were busy raising temples to yourself with all the excess productive capacity you had, you were in a position of near-absolute power and control which your hallowed ancestors could hardly have dreamed of and the whole situation was just running along nicely like a canoe on wet grass ... when suddenly this bristling lump of iron appears sailless and trailing steam in the bay and these guys carrying long funny-looking sticks come ashore and announce you’ve just been discovered, you’re all subjects of the Emperor now, he’s keen on presents called tax and these bright-eyed holy men would like a word with your priests. (Banks, 1998, pp. 78–79)

These problems are unsolvable if we assume that planning must be based on computability alone. The outside context problem clearly invalidates the inner cohesion of calculability and planning, as the problem it presents appears from outside of the division into “parts which remain the same as it in kind.” Why is this so? Because of the depicted situation of “near-absolute power and control,” which falsely absolves the assumed perspective from the possibility of strong external influences. However, if we follow Banks’s description, we must assume that something like this will happen exceedingly rarely. Banks himself portrays outside context problems as somewhat singular:

An Outside Context Problem was the sort of thing most civilisations encountered just once, and which they tended to encounter rather in the same way a sentence encountered a full stop. (Banks, 1998, p. 78)

However, considered within the context of Lacanian psychoanalysis, there are good reasons to assume that such problems, emerging from the inherent not-all of a situation’s count-as-one, are not rare at all. The law and the field opened by the law are different in extension. The latter includes the indeterminate dimensions of the unconscious, which produce that which, in the law itself, is not the law itself, but which is equiprimordial with the law.

The machine’s problem is that it has no access to this dimension; its unconscious structure is that of a radical abolishment of the unconscious. The unconscious is not just repressed or foreclosed, but radically impossible. Lacan argued in his early seminars, which also discussed the early cyberneticists’ idea of the thinking machine, that the difference between the symbolic in machines and in humans is in the latter’s capability for Verdrängung and Verwerfung:

With a machine, whatever doesn’t come on time simply falls by the wayside and makes no claims on anything. This is not true for man, the scansion is alive, and whatever doesn’t come on time remains in suspense. That is what is involved in repression. (Lacan, 1978/1991, pp. 308–309)

This distinction that Lacan makes to explain the problems of repression also offers a way to read the different interactions that outside context problems will create within the Lacanian logic of subjectivation and the modern computer’s logic. Outside context problems are a permanent problem for the subject barred by the signifier, as its ego position never allows it to assume an uncastrated position of “near-absolute power” over its own inner coherence. The computer, on the other hand, is structurally forced into “dispelling all the ambiguity of language,” as ambiguity, as the metonymy of the signifier, is exactly what gets lost here. Hence, the machine is faced with the demand of acting “directly as the instrument of the big Other's will” (Žižek, 2006, p. 127). This means that the position of the computer, which it mirrors to us as an Other in various situations, is that of a structurally enforced exclusion of the lack in the Other.

Conclusion

The computer, and AI as we see it today in particular, should be classified in social terms as a specific structural incision. The defining aspect of these artifacts is the computational barring of the unconscious, which is constituted by the specific tradition of computation that brought forth material computation together with the operation upon which a physical calculator is built. These two aspects together create an anchoring artifact that grounds a form of what Heidegger called the modern mode of being as computability. To demarcate a specific philosophical problem that appears here, let us take what Quentin Meillassoux, a key representative of the new materialists, assumes about the accessibility of the thing-in-itself:

All those aspects of the object that can give rise to a mathematical thought (to a formula or to digitalization [emphasis added]) rather than to a perception or sensation can be meaningfully turned into properties of the thing not only as it is with me, but also as it is without me. (Meillassoux, 2006/2008, p. 3)

If the digitizability of an object (not just its mathematization) were central to us accessing the thing-in-itself, the problem of transcendentality would not be solved but rather doubled by the limits of material calculation. This is because, despite its ubiquitous use in applied mathematics, the digital not only cannot reach the derived absolute that Meillassoux assumes, it even bars full access to it. However, this is not necessarily true of formalization, which we need to approach within the Lacanian field. This would mean that the impasse of formalization (Badiou, 1988/2006, p. 5), and not the explicability of knowledge through digital means, is what matters for this access to the absolute. We therefore need to include the transgression (for example, in the objet petit a) in the praxis of writing. What the modern materialization of computation then shows is the distinction between formalization and digitalization, where the former is capable of addressing the real while the latter forces us to repress it.

This is especially important, as Meillassoux has shown that it is this void that enables us to link science to an absolute foundation. Epistemically it means that, in contrast to mathematics, computer science as yet has no access to this absolute foundation and resides within a transcendental frame closely related to fideism (compare chapter 2 of Meillassoux, 2006/2008). Viewed from these limits of its inherent possibilities, AI and modern computers appear to us quite differently than the common science fiction narratives would have it. They appear to us not only as epistemic tools (which they without any doubt are) but also as social objects that act upon us and thus uphold a certain ontology, insofar as we understand ontology in the Heideggerian sense as the distinction between sense and non-sense or intelligible reality and the unintelligible virtuality. The role of algorithms and AI as a big Other, as it is widely discussed by Lacanians, should therefore reflect this misrecognition of the symbolic that is fundamental to computation. It should also be noted that this in no way implies a Luddite judgment of AI or modern computing, but that these technologies as such, their essential characteristics, need to be the focus of much more rigorous research beyond their technical realizations. It is necessary to apply such rigor not only to their immediate applications but also to their foundations and symbioses within the social space.