1 Introduction

One cannot conceive of an environment without that which surrounds it. Both sides are relationally entangled. In this regard, the process of adaptation to a constantly changing, dynamic environment is one of the central challenges of robotic AI. In recent years, adaptive autonomous technologies such as robots, drones, vessels, rovers, or cars have made enormous advances in sensing, interacting, and finally also deciding on the basis of environmental relations. The technological capabilities of sensors, algorithms, and machine learning are becoming ever more fine-grained with every development cycle, enabling more or less reliable, quick, and safe adaptive behavior in unpredictable environments. Self-driving cars in particular, or rather the Advanced Driver Assistance Systems (ADAS) installed in all new cars, are at the center of public interest, since they are an element of the current transformation of traffic and have the potential to fundamentally transform the way we move.

These technologies are part of what Katherine Hayles has called a "movement of computation out of the box and into the environment" (2009: 48). When digital, smart, and spatially distributed technologies with advanced sensorial capabilities—which are today often discussed under the rubric of the Internet of Things or of ubiquitous computing, or encountered in the form of robots, drones, and autonomous cars—capture surrounding spaces and interact with other objects, the time-critical processing of environmental relations becomes a central technical challenge. This is especially important in street traffic, where countless human and nonhuman actors encounter each other and must interact at high speed. Automating traffic places the highest level of demands on the time-critical processing of data, AI and the corresponding synchronization of movements in space. ADAS, as installed in all new cars in different degrees of automation, must be able to cope with the unpredictability of the environment and react to potential events within the shortest possible time intervals. In traffic, time-critical decisions can be a matter of life and death.

Such adaptive technologies have been described as autonomous in the sense that they are able to interact with their surroundings. They cannot be understood apart from their environments, since these surroundings are not passive conditions of their behavior. Rather, the operation of these systems is based upon a technological ability to pervade relations with the environment—a process of entanglement in which, for lack of a better word, that which is surrounded and that which surrounds co-exist and mutually transform each other. Autonomy here means that, in the sense of Edgar Morin's formulation of the "ecological relation" (1992), a system is tied to its surroundings in order to act autonomously. Because it is independent, it depends upon its environment, and only because it depends on it can it become independent. Autonomy is not to be understood here in the Kantian sense of the self-determination of goals and intentions but rather, as I will explain later, as an effect of the complex interaction of human and nonhuman components on both sides of the environment/system divide. These technologies possess what philosopher Christoph Hubig (2015) calls operational and strategic autonomy: they are able to choose different means to reach a specific aim, but cannot determine their own goals. Obviously, they do not achieve moral autonomy. When I speak about autonomous cars, I do not intend to assign them human-like autonomy but rather the capacity to adapt to the environment by means of what I suggest we call microdecisions. This heuristic concept should help describe the car, its technical systems, and its relation to the environment as a complex time-critical assemblage that allows the distributed robotic system to adapt its behavior and become autonomous in this operational and strategic sense.

Using their sensors and respective filter algorithms, autonomous systems must be able to register events and objects in their proximity, locate and situate themselves in relation to them, project their own behavior into the future, build world models and on this basis decide upon actions, reactions, and interactions. This development raises questions about how such technologies manage both the unpredictability of the environment and the complexities of interaction with human and nonhuman entities. It consequently also raises questions about the conjunction of environmental sensors, data analysis, and the procedures of decision-making that are required to operate under conditions of uncertainty.

In reference to these technologies, autonomy is understood here as the ability of the car (or the drone or the robot) to solve tasks and to choose between options in order to fulfil specific (and externally given) goals. From a media-theoretical perspective, the question arises as to how these options and alternatives of possible behavior can be technically generated. They are never given, but rather depend on the calculation of virtual world or scene models based on algorithmically filtered sensor data. These models consist of nothing other than probabilities. They are not representations of the world but more or less fine-grained resolutions of the probabilities of those environmental factors that are relevant for the vehicle's actions. Only in reference to these probabilities does it make sense to speak of decisions. Decisions require the availability of options—what I suggest we call alternativity. As will become clear, the generation of alternativity is strictly bound to the environmental relations of the car and the processes of world modeling.

The challenge in conceptualizing the autonomy of these technologies lies in understanding their capability for decision-making and, thus, its operationalization. In the following, I will introduce the term microdecisions as a heuristic instrument.Footnote 1 Microdecisions are not strictly defined protocological procedures, and they do not designate a specific method. Rather, the term allows us to conceptualize the nontechnical dimension of these technologies in terms of their technological procedures. It refers to a certain perspectivization rather than to a concrete object. But microdecisions are also not simply an observer's construct. Rather, they are an effect of a constellation of microtemporalities, algorithmic categorizations, and data analysis that has become prevalent in a range of different contexts. They are not algorithms but an effect of their implementation. For this reason, the heuristic of microdecisions can only be pursued with a narrow focus on specific technologies. There can be no general microdecisions but, rather, only technological conditions under which they become possible. In the examples presented here, these conditions consist in the need to decide within extremely short time spans, the microtemporal processing of data, and the givenness of alternativity.

As an alternative category of analysis, the term microdecision illuminates a connection that would remain invisible if one were to use only technical or mathematical vocabulary. Although a conceptual change in the language of description will not change the contours of the subject, it will allow us to ask previously unenvisaged questions. Microdecisions decide upon options for possible solutions and potential worlds. They become effective because they circumvent human restraints and can only be processed in large numbers on a microtemporal scale. Their scope becomes particularly evident in the context of technologies whose algorithmic capacities are used not only for static calculations of solutions but also for the time-critical localization, orientation, and movement of technical objects within space.

By focusing on how adaptive technologies model their environment by employing processes of Simultaneous Localization and Mapping (SLAM), I will argue that microdecisions—that is, the sheer number and speed of operations based on algorithmic processes—constitute the operationality of technological adaptation to constantly changing environmental conditions. As I will show, the microdecisions that underpin autonomous technologies are both an element and an effect of the algorithmo-sensorial virtualization of an environment into probabilistic models. In other words, the technological power to choose between options of behavior is based on an alternativity of possibilities. The technologies that create this alternativity turn out, in the same step, to enable decidability and, finally, what Paul Bourgine and Francisco Varela (1992) have called world-making as a basis for the autonomy of adaptive technologies.

This conceptual framework makes clear that the way in which autonomous adaptive technologies interact with their environments necessarily entails political questions about sovereignty and the autonomy of the automaton. The politics—and biopolitics—of autonomous technologies consist in keeping alternatives open and not being determined in advance. The temporality of microdecisions is decisive and critical—in the sense of crisis, as a moment of decision, from the Greek krinein, to decide—and thus political. Microdecisions constitute a new mode of power and decide upon possibilities that determine life in digital cultures: who can move where and who cannot, who is connected and who is not, who survives an accident and who does not. The politics of microdecisions thus inevitably raise the question of sovereignty—as the power to decide—in digital cultures. While the political constraints of algorithms, as they have been described in recent years (for example, Gillespie 2014; Pasquinelli 2019), certainly entail a form of power, the temporality and medium specificity of this power remain hidden if we only take algorithms into account. Algorithms are temporally unspecific—they can be processed at any speed and in any medium with the same results. The heuristic of microdecisions allows for a focus on the specificities that undergird the new forms of power, sovereignty, and autonomy.

In contrast to human decisions, microdecisions are not tied to a singular adjudicating consciousness but, rather, consist in the capacity of the distributed agency of what Katherine Hayles has called "cognitive assemblages" (2016). An autonomous car does not have a central unit equal to human consciousness but is a composite of different distributed entities—including humans—whose time-critical interaction enables agency. Instead of understanding the car as an isolated entity, recent approaches in science and technology studies and media theory conceptualize it as an assemblage of algorithms, mechanics, CPUs, other drivers, roads, cars, infrastructures, regulations, human and nonhuman bodies, etc. (Weber and Suchman 2016). It is in this capacity to solve cognitive tasks that autonomous cars, human actors, and their environments constitute a "cognitive assemblage" that can achieve the capacity to decide. And, to again use Hayles’s language, in this dynamic unfolding of human and nonhuman, material and biological actors into distributed agency “the spectrum of decision-makers” (Hayles 2016: 115) is widely expanded.

The machines that carry out microdecisions are of course produced and managed by humans, who also program the machine's algorithms. The specifications according to which decisions are made are necessarily established in protracted institutional, legal, and political negotiations. Microdecisions are thus always complementary to institutional and collective macrodecisions about appropriate behavior and to mesodecisions of individuals in specific situations—be it on the mesoscale of a driver or on the macroscale of programming, jurisdiction, and determining what becomes part of a world model and what does not.Footnote 2 These rules can then be applied to new situations depending upon the pre-given goals defined beforehand in macrodecisions—for example, that the safety of pedestrians is more valuable than the safety of passengers. Microdecisions are necessarily calibrated for and judged on what happens on the road. But they are located at a different operational level from either political intervention or technical implementation. They are effective for two reasons: first, precisely because they bypass the time-consuming act of human decision-making; second, because they are part of a technological assemblage in which agency is redistributed according to a new structure of decidability.

Microdecisions depend upon assessments, evaluations, and data collections made in advance of decisions, but only macrodecisions can be incorporated into future decision-making. Nonetheless, microtemporal acts of decision-making entail their own sort of politics and are not congruent with the public commissions that define protocols or the legal framework of algorithms. A contemporary analysis of the power of microdecisions in digital cultures should therefore concentrate on technical infrastructures and their re-configurations of sociality and agency, which means not differentiating between human and technical actors.

In the following, I would like to demonstrate that the act of deciding between options—even if it is based on algorithms—cannot be explained by making exclusive appeal to algorithms. The operational function of alternativity requires a different approach—one that explores algorithms but does not understand them as explanations in themselves. Rather, what needs to be explained is how algorithms achieve the power to decide and why decisions are always more than algorithmic. Hence, when I apply the concept of decision, traditionally associated with intentionality, to microtemporal processes below the threshold of consciousness, I do so to explain the necessary openness—in the sense of having two or more options—of autonomous behavior (in the tactical and operational sense). It is also important that these decisions can always be made differently and are therefore not immutable. The potential of a politics of microdecisions lies precisely in this difference. To become decisions, microdecisions need alternatives. The givenness of alternativity is a necessary effect of the probability calculus of the technologies of what is called world modeling in robotics.

The following sections each deal with one of the leading concepts of my analysis—microtemporality, probability, and virtuality/alternativity—by considering three examples: a car crash on a highway in the Netherlands, the first self-driving cars at the DARPA Grand Challenge in 2005, and the history of autonomy in research on artificial intelligence around 1990. The article concludes with remarks on microdecisions as a heuristic for the present. What is presented here is clearly only the first step of an in-depth analysis of the calculative and microtemporal processes taking place in autonomous cars. World modeling has many facets that cannot be discussed here—object recognition, neural networks, questions of latency and buffering, or trade-offs between speed and accuracy. Once its contours have become clear, the concept of microdecisions should prove valuable for further investigations of these capacities.

2 The microtemporality of decisions

The Society of Automotive Engineers distinguishes between six levels of automation (0–5) with different degrees of autonomy (Fig. 1). With the basic driver support features of levels 1 and 2, ADAS systems can accomplish limited tasks, although the driver must constantly supervise the car—at level 1 in what is called hands-on mode, at level 2 in hands-off mode. Advanced automation starts at level 3, at which the car is able to regulate its behavior autonomously in specific situations in eyes-off mode. At level 4, a mind-off mode, the driver can focus on something else, and at level 5 there is no need for a driver—the steering wheel is optional. The degrees of automation in today's cars range from collision avoidance systems or lane assistants at levels 1 and 2 to recent prototypes of autopilots comprising a suite of interconnected ADAS systems for highway use at level 2 or 3, and the first driverless shuttles at level 4 operated commercially by Waymo in strictly geofenced areas in Arizona and California. Whether the use of fully autonomous vehicles (level 4 or 5) will ever be viable in all areas of the world is a matter of debate—and since 2019 more and more companies have begun to withdraw their market predictions as the complexities of the task have become clearer (Stilgoe 2020). Nonetheless, even if autonomous systems are not (yet) capable of totally replacing the driver, the relevant hardware is already installed in millions of cars, and new software is constantly being developed. Semi-autonomous cars, which are already being produced by almost all manufacturers, generate extreme amounts of data due to the multitude of sensors they possess, and they require powerful CPUs that process these data in real time. Even recent car models at lower levels are sophisticated computer systems.
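The taxonomy just described can be summarized in a minimal lookup table. The sketch below is purely illustrative shorthand for the SAE scheme: the dictionary, the labels, and the helper function are my own and are not part of any standard or vendor API.

```python
# Illustrative sketch of the SAE levels of driving automation (0-5).
# Labels paraphrase the description in the text; the structure is invented.
SAE_LEVELS = {
    0: "no automation (driver does everything)",
    1: "driver assistance (hands-on mode)",
    2: "partial automation (hands-off mode, driver supervises)",
    3: "conditional automation (eyes-off mode, specific situations)",
    4: "high automation (mind-off mode, geofenced areas)",
    5: "full automation (no driver needed, steering wheel optional)",
}

def requires_constant_supervision(level: int) -> bool:
    """At levels 0-2 the driver must constantly supervise the car."""
    return level <= 2

print(requires_constant_supervision(2))  # True
print(requires_constant_supervision(4))  # False
```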

Fig. 1

SAE levels

To gain an understanding of autonomous cars, it no longer suffices to view them as vehicles of traffic that serve the demands of transportation. They are not only combustion or electric engines but also media complexes, supercomputers with a host of different interfaces, highly developed adaptive systems, machines for data processing, and context-sensitive environmental technologies. Equipped with machine learning, they are able to react to the requirements of their environment and to the behavior of passengers as well as passers-by; they both anticipate possible events and project estimates into the future by modeling their surroundings in different degrees of granularity. Machine learning is one element of this process, but given the recent hype, it is important to keep in mind that the process of generating an alternativity for decisions does not depend upon machine learning.

All ADAS systems and their complex interrelations are meant to optimize interactions with the environment, but the environment with which an autonomous system operates is never a given, isomorphic representation of the world. Rather, at least for the more complex combinations of ADAS systems at higher levels of automation, the environment is a fragmentary and operational model created by a specific correlation of sensors with different capabilities, filter algorithms that analyze sensor data, processes of machine learning that optimize pattern recognition, and operating decision modules. By focusing on this intersection of different technologies, this paper explores the epistemological constellation in which the autonomous car's environment is constituted by processes of world modeling that, as is shown below, incorporate probabilities that are fundamental for microdecisions. Microdecisions are decisions within and for the model, even if their results can be perceived as actions of the car. But it is not possible to deduce from observation the process leading to the action. It is necessary to describe microdecisions on a different scale and with reference to their microtemporality.

In the decision modules of autonomous cars, information from different sources is collected. Use is made not only of algorithmically filtered sensor data about the environment but also of odometric data about speed and acceleration, localization, routing, maps, traffic information, etc. This module is tasked with merging this information and deriving decisions that meet requirements of safety, security, and reliability. Due to the heterogeneity of these data, formalizing this process is extremely challenging. At the core of these technologies lies a mutual dependency of decisions, probabilities, and the virtuality of world models. This dependency is central to how these technologies transform the world in which we live. For this reason, it is important to understand at least the basic technological procedures employed here, instead of taking them for granted, and consequently to go into some details that might initially seem obscure. This 'close reading of technology' is not meant to imply any kind of determinism. It is justified because it demonstrates that the core of these technologies cannot be understood with a solely technical vocabulary. Autonomous technologies bring forth a new, yet-to-be-analyzed, and fundamentally political constellation of microtemporalities and environmental spaces, of probabilities, data, and decisions.

Within the complex and manifold interrelations of the technical elements that constitute an autonomous car, microdecisions may have different functions. As the following example shows, modules for decision-making can be specifically assigned the task of mediating between the environment, the car's reactions, and its sensory input. The decision-making and algorithmic systems installed in cars are, of course, proprietary and not accessible to the public. To provide an overview of their functionality, I will refer to a research project called xDriver, developed by Zdzisław Kowalczuk and Michal Czubenko at the University of Gdansk, Poland (2011). This project, which consists of an algorithmic setup for a self-driving car, is not representative of the many different approaches to algorithmic decision-making in environmental technologies, especially because it tries to integrate affective computing into the autonomous system. It can, however, serve as an example of the general structure of decision algorithms. The project's main research objective is to develop an autonomous system of decision-making by modeling cognitive and psychological attributes of human drivers. This system is highly adaptive to changing environmental conditions because constant feedback loops between its components "calculate the estimates of the impact factor of prospective reactions" (Czubenko et al. 2015: 574). Its basic sensor-based decision algorithm is shown in Fig. 2.

Fig. 2

Intelligent system of decision-making (Czubenko et al. 2015: 572)

As the algorithmic flow-chart in the figure shows, decision-making is one module in a set of different interconnected tasks that depend upon each other in strictly determined ways. Such an algorithm, understood as a set of rules to be followed to transform an input into an output, consists of dozens of subroutines that manage different tasks and depend upon input from the environment. Of interest in this context is that the module "optimization of decision" is integrated into two interrelated feedback loops with the environment: as a consequence of the decision, the current state of the car is modulated by a reaction. This change of status is instantly fed back into the decision-making process: the car constantly monitors the effects of its decisions on its own status, which covary with changes in the scenario. These changes, caused by the car, are part of the second, larger feedback loop in which the car perceives the environment, builds a virtual model, and comes to a decision based on projections about the status of the identified objects.

This twofold involvement of the decision in two mutually dependent feedback loops shows not only that decision-making is part of a larger ensemble of algorithmic patterns, but also that the module in which the status of the car and its reactions are woven into the environment depends upon continuous input from the outside. As a result, the current state and the decision about the reaction are always temporally connected. The decision is based on an already past state of the environment, which is then directly fed back into the decision for the next cycle. What seems trivial becomes decisive once we consider the microtemporality of these processes. The mutual dependency of all influencing factors results in a constant readjustment of the car's decision-based behavior.
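The twofold feedback loop can be rendered schematically. The following toy cycle is my own illustration, not the proprietary xDriver code: all function names (perceive, optimize_decision, react, update_scenario) are invented placeholders, and the scenario is reduced to a single variable, the gap to a lead vehicle.

```python
# Schematic sketch of the two feedback loops described above.
# Toy scenario: the car adjusts its speed to keep a safe gap (in metres)
# to a lead vehicle travelling at 20 m/s. All names and numbers invented.

def perceive(env):
    """Sensor reading: an already past snapshot of the environment."""
    return {"gap": env["gap"]}

def optimize_decision(model, state):
    """Decide on the basis of the virtual model, not the world itself."""
    return "brake" if model["gap"] < state["safe_gap"] else "hold"

def react(state, decision):
    """Inner loop: the decision modulates the current state of the car."""
    if decision == "brake":
        state["speed"] = max(0.0, state["speed"] - 5.0)
    return state

def update_scenario(env, state, lead_speed=20.0, dt=1.0):
    """Outer loop: the car's own change feeds back into the scenario."""
    env["gap"] += (lead_speed - state["speed"]) * dt
    return env

state = {"speed": 30.0, "safe_gap": 25.0}
env = {"gap": 20.0}
for _ in range(3):                        # three decision cycles
    model = perceive(env)                 # perception -> virtual model
    decision = optimize_decision(model, state)
    state = react(state, decision)        # decision -> reaction -> new state
    env = update_scenario(env, state)     # new state -> changed scenario
print(state["speed"], env["gap"])         # 15.0 20.0
```

Each cycle decides on a snapshot that is already past when the reaction takes effect, and the reaction itself changes the scenario that the next cycle will perceive.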

The autonomous system must take a model of the car's environment as a basis for making numerous decisions to resolve specific situations: braking or not braking, turning right or left, changing lanes or not, and—in the future—perhaps even passing another vehicle or not. These acts cannot be understood solely as the execution of deterministic algorithms that always provide the same output for the same input by specifying a threshold at which a defined reaction must be triggered in the case of a specific event. Nor can they be explained by reference to an instance of consciousness that decides on the basis of knowledge, experience, or perception. Rather, if we stipulate that autonomy in this context means that a car is able to adapt to environmental challenges, then any conception of purely algorithm-based solutions becomes problematic. Autonomous systems need to have a multiplicity of exit-points linked to specific sequences of interaction. Exit-points are facilitated by algorithmic if–then conditions that evaluate external data compiled by sensors. An algorithmic view implies that if the system is exposed to the same conditions, it will react in exactly the same way. However, in a car, an autonomous system must be able to respond to all relevant events in the environment. Its autonomy requires that there be several options open to it. Otherwise, the system would be deterministic rather than operationally or strategically autonomous, and it would not be able to adjust adaptively to unpredictable and unsafe environments. The algorithms of autonomous systems therefore include probabilistic models, because deterministic responses for all scenarios are, in fact and in principle, impossible. The unpredictability of the environment necessitates a probabilistic approach to algorithms that allow for self-adjustment and can have multiple inputs—for example, neural networks or Bayesian networks with decision trees that determine different outcomes based on states of the system.
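A minimal sketch can make this concrete: the algorithm below is deterministic in its process (same input, same output), yet it implements a probabilistic model with several exit-points. The priors, likelihoods, and thresholds are invented for illustration and do not correspond to any deployed system.

```python
# Sketch of probabilistic exit-points: a Bayesian update of the belief
# that an obstacle lies ahead, followed by a choice among three options.
# hit_rate / false_rate model an (invented) imperfect sensor.

def posterior_obstacle(prior, detections, hit_rate=0.9, false_rate=0.2):
    """Update P(obstacle) over a sequence of noisy sensor readings."""
    p = prior
    for d in detections:
        like_obs = hit_rate if d else 1 - hit_rate
        like_free = false_rate if d else 1 - false_rate
        p = like_obs * p / (like_obs * p + like_free * (1 - p))
    return p

def choose_exit(p):
    """Several exit-points linked to the same situation: alternativity."""
    if p > 0.9:
        return "emergency_brake"
    if p > 0.5:
        return "slow_down"
    return "keep_lane"

p = posterior_obstacle(prior=0.1, detections=[True, True])
print(round(p, 4), choose_exit(p))  # 0.6923 slow_down
```

The same code with a third positive detection crosses the upper threshold and exits via "emergency_brake": the alternativity lies in the model's probabilities, not in any indeterminacy of the computation itself.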

Consequently, it becomes difficult to conceive of autonomous decision-making as the transformation of input into output on the basis of algorithms such as if–then conditions, decision trees, or flow-charts. Probabilistic algorithms and predictive neural networks are well suited to adapting to variable inputs and environmental uncertainties. Nonetheless, as I want to argue, it is important to describe these processes on a different level because, in the simplest possible terms, (micro)decisions cannot be epistemologically reduced to algorithms. Even if we take into account that algorithms are not opposed to probabilistic and predictive models, they are temporally nonspecific and not bound to a specific medium. For this reason, I suggest that we find a new language of description and a heuristic instrument that takes into account the openness and nondeterminacy necessary for decisions.Footnote 3

Decisions require time. If a decision happened immediately, it would have been decided in advance and would thus be deterministic. Temporality is, thus, key to understanding how these technologies operate. The temporality of microdecisions is an effect of the sheer mass of calculations and the speed of automated processing. Microdecisions exceed human capacities because their numbers and speed can only be accomplished by computers: their quantity is their quality. The algorithmic processes that underlie microdecisions can in principle also be performed by humans because, as formalizations, algorithms can be processed by a machine or a human at any speed with exactly the same results. By definition, time is not critical for algorithms, only for the technologies that apply them. An algorithm returns the same results regardless of whether it is processed by a human brain or by a digital computer, of whether it takes a microsecond or an hour. But the medium-specific speed and quantity of microdecisions are not substitutable. Time is critical for microdecisions. They need to be performed on temporal scales below human capabilities. Like their temporality, their scope, too, is always a question of sheer computational power, which enables microtemporalities in which decisions can be decoupled from human agency. Like all computational processes, they only become effective when processed in times and quantities that are inaccessible to humans. If decisions are drawn in this microtemporality, it becomes necessary to negotiate what it means to decide and to be sovereign.

To explain this microtemporal dimension of automated decision-making and the reciprocal and recursive entanglement of autonomous technologies with their environments, I would like to provide an example that helps reconstruct the framework of alternativity underlying microdecisions. In September 2016, the car manufacturer Tesla released Update 8.0 for the operating system of the onboard autopilot, which is limited to highway applications. The update included a new algorithm for processing the signals received from the vehicle's built-in radar. The fact that Tesla speaks of an "autopilot" is not to be mistaken for evidence of self-driving capacities at level 4 or 5. Rather, autopilot here means the combination of different ADAS systems: collision warning, autosteer lane centering, self-parking, automatic lane changes, the ability to summon the car, and adaptive cruise control. In this regard, Tesla offers a good example of the constraints, challenges, and potentials of driving assistants, although it is certainly not the only competitor in the field.

Manufactured by Bosch, the mid-range radar sensor (MRR), which is mounted on the underside of the car, is also used by other automakers. Tesla, though, was the first company to introduce a new feature: with algorithms optimized by machine learning, the onboard processor of the 2016 Model X, an nVidia DRIVE PX2, extends its data analysis to the motion of vehicles ahead of the car directly in front of the Tesla. Beginning with this update, the system uses the fact that the radar signal is reflected between the underbody of the preceding car and the road surface to detect the approximate shape and movement of objects ahead, even if they are invisible to the driver and the car's visual sensors (Tesla 2016).

This new application does not rely on improved hardware but on optimized algorithmic processing of available sensor data. It extends the car's reaction radius to events that are invisible to the driver. This implies that the intervals of intervention of the car and the driver do not coincide. Two months after the update, a high-speed accident occurred between two non-Teslas on a motorway in the Netherlands, fortunately without any injuries. A dashcam in an uninvolved Tesla Model X recorded the accident. The video, which the driver published online, shows how the Tesla's onboard autopilot reacts to the imminent collision of the two cars in front of it, even before the collision takes place.Footnote 4 The Tesla brakes automatically, before the driver even has the chance to recognize that something is going to happen, let alone intervene. A warning signal can be heard in the video and the car starts to slow down, but at this moment nothing unusual can yet be seen on the highway. Seconds later, we realize that the Tesla has, by means of the new algorithm, predicted the collision before it happened, calculating the speed and movement of the vehicle that was two cars in front of the Tesla and thus invisible to the Tesla's driver. Had the Tesla not responded automatically in this short interval, it might have driven directly into the resulting accident.

This interval of intervention is accessible only to the car, not to the driver. Only after the event do we understand not only that the time to react was below the threshold of human attention and responsiveness—though the driver could potentially anticipate the event—but also that the driver could not have reacted because he could not see anything. The car, on the other hand, anticipates an accident that will only become visible to the driver in its consequences. In other words, the intervals of intervention of the car and the driver do not overlap. The car's algorithms calculate the likelihood of a future in which it would be involved in the accident. In an extremely short time span, far shorter than any possible human response time, it must decide, based on sensor data, between this future and a response that will help avoid this future. The autonomous car brakes before the accident even happens, let alone becomes visible to the driver. It responds to a potential event because its ADAS systems compare the speed and direction of all three cars, calculating the likelihood of the accident and reacting accordingly. This reaction is a deterministic consequence of the calculation of the probability of the collision. The algorithms are deterministic in their process, i.e., they always deliver the same output for the same input. Hence, if the threshold is exceeded, the ADAS systems initiate the braking process, depending on other variables such as weather, road surface, and cars driving behind.
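Tesla's actual implementation is proprietary. A strongly simplified, generic sketch of such a threshold decision might use a time-to-collision (TTC) heuristic, as is common in the collision-avoidance literature; all names and numbers below are invented for illustration.

```python
# Generic sketch of a threshold-triggered braking decision.
# TTC heuristic: seconds until impact if neither vehicle changes its motion.
# The 2.0 s threshold and all speeds/gaps are invented example values.

def time_to_collision(gap_m, own_speed, lead_speed):
    """Return the time to collision in seconds (inf if not closing)."""
    closing = own_speed - lead_speed
    return float("inf") if closing <= 0 else gap_m / closing

def should_brake(gap_m, own_speed, lead_speed, ttc_threshold=2.0):
    """Deterministic consequence: same input always yields same output."""
    return time_to_collision(gap_m, own_speed, lead_speed) < ttc_threshold

# The vehicle two cars ahead brakes hard; the radar registers it
# under the underbody of the car directly in front:
print(should_brake(gap_m=30.0, own_speed=33.0, lead_speed=5.0))   # True
print(should_brake(gap_m=50.0, own_speed=25.0, lead_speed=25.0))  # False
```

The point of the sketch is the one made in the text: the computation is deterministic, but the decision it implements exists only relative to a calculated probability of a future collision and an alternative future in which braking avoids it.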

2.1 The probability of world models (SLAM)

In spite of the givenness of the reaction, it must be possible, in principle, for a decision to be different, because only then would the autonomous system be able to interact with an environment that continuously demands new adaptations of behavior. If the vehicle had no alternatives, it would not be able to respond to, and interact with, its unpredictable environment. Its autonomy necessitates not only several exit-points but an openness toward other options and an implementation of a probability calculus on the level of its technological architecture. The key questions are where, in such a process, microdecisions are made, how alternatives are created technically, and how alternatives are compared to each other. In the following, I will try to answer these questions by referring to one specific architecture of autonomous systems. There are other architectures, but the examples described here allow us to further refine the heuristic of microdecisions (Fig. 3).

Fig. 3

Example of an architecture of autonomous cars (Matthaei and Maurer 2015: 159)

To understand how autonomous vehicles interact with their environments, and thereby project the alternativity that underlies microdecisions, it is important to distinguish between a strategic, a tactical, and an operational level. This subdivision and the corresponding illustration were developed in the context of a research project on the architecture of self-driving cars at the Technical University Braunschweig in Germany. The three levels correspond to the top three rows. Below that is the level of sensor-based data acquisition. While the strategic level is concerned with the navigation of the car between two locations and the operational level is concerned with the execution of driving maneuvers, the tactical level encompasses methods of locating the car in its surroundings and analyzing the situation. Algorithms are inevitably involved at all levels. At the strategic level, they consist, for instance, in calculating routes, and at the operational level, in steering and maneuvering the vehicle. But at the tactical level, algorithms are used to create probabilities of possible world models and corresponding options for action.

The figure shows how sensor data flows into the “Feature Abstraction and Model-Based Filtering” module and from there further into “Context/Scene Modeling” and “Road-Level Environmental Modeling.” On the left side, among the externally supplied data, which include street maps or traffic reports, there are three levels of world modeling. Modeling—whether of worlds or scenes—in this context is not the complete representation of the outside world in the mind of the machine, as it were, but the assembly of fragmented sensor data into a viable, i.e., operable, model of the environmental factors relevant to the system. Any modification or reintroduction of sensors requires an adaptation of these algorithms—as in the Tesla example, by a new filtering of radar data. The different capacities of sensors must be adjusted for the detection of the environment. For example, lidar or “light detection and ranging” (which is not used by Tesla because it is very expensive) uses point-clouds of lasers to accurately capture three-dimensional models. Lidar has a limited range, but it determines the contours of nearby objects by the measurement of the distance to these objects, while optical cameras are unreliable in close-up situations but provide a long range. In this regard, each sensor technology collects data constituting a sensor-specific environment, and these data can then be merged with data about other environments. This process is called world modeling. Depending on the equipment of the vehicle, the applications range from the calculation of the distance to other road users to 360-degree modeling.

There are different approaches to modeling, and I will focus here on one specific method, SLAM, which was important for the historical development of the first self-driving cars. This approach has the advantage that it is well documented, while many other technologies are proprietary. Regarding current developments, modeling for operational design domains (ODDs) is a standard approach. The ODD approach determines the conditions of operating the car in advance. It consists of pre-given spatial and operational limits based on pre-built maps and pre-collected data of the strictly defined domains in which an autonomous car is supposed to operate. It also defines situations in which the car cannot operate safely without the driver's support. These domains can consist of a geofenced area, an approach pursued by Waymo and Uber for example, or of specific modes of behavior, for example driving on a highway or in bad weather. If the conditions are not met, the car returns the driving tasks to the driver. For robotics—the historical context of the development of SLAM—the challenge consists in engaging with uncertain and unknown environments in general, something that is replaced in ODDs by prefabricated maps and definitions. SLAM, in contrast, was developed to find a solution for environmental uncertainty and has also conceptually contributed to a new understanding of autonomy. While an ODD might include SLAM modeling for given situations, the originary claim of SLAM is to operate under conditions of uncertainty (for example in extraterrestrial terrain).

Scene or world modeling in each case includes procedures that—in different degrees of granularity—contextualize the isolated sensory data about the environment and the condition of the vehicle, interweave the model of a world, and locate the vehicle. As I will explain in the following, a model created with the help of SLAM is neither identical to the topological space nor a representation of objects but extracted from data about the probabilities of states and events that are themselves extracted from the constantly changing relations between the car and what is registered by its sensors. The car operates on nothing else than probabilities and has no access to an “objective world” that could be represented as a map. As will become clear, the mapping of a robot's environment, which is the preliminary step of every world model, is itself an act of calculating probabilities. As a model of probabilities, the vehicle's world model is virtual in the sense that each probability encompasses a multiplicity of alternate futures. World modeling, in the context of such technologies, is always the modeling of possible worlds that are merged in a single model that contains them virtually.

To move in the complex environments of road traffic, an autonomous vehicle must continuously register the states—shape, position, and movement—of the surrounding objects and locate itself in relation to them. It does not have access to a view from the outside but must constantly recalculate its own location and possible reactions to its environment. Since both the vehicle and other road users are mobile, the environmental relationships are constantly changing. Because the vehicle cannot know which position it is currently in, neither its location nor its relationship to other objects is given. In other words, the technical challenge consists of a safe approach to the uncertainty of the environment combined with the unpredictability of the behavior of other road users. Safety in dealing with this double uncertainty (of the system over its own state as well as the environment) is a key component of the strategic and operational autonomy of the vehicle.

The problem of environmental orientation that I am raising here was discussed in robotics and artificial intelligence research around 1990 under the name of “simultaneous location and mapping” (SLAM). This research posited that the initial condition of a robot is its lack of information about its environment (Durrant-Whyte 1987; Smith and Cheeseman 1986). This environment can be mapped only by moving and collecting data with a robot’s available sensors, but these sensors are just as prone to error as the odometry of the robot’s own state. All the data that the robot registers about its environment are relative to its own position and, therefore, dependent on its location, which in turn is needed to locate itself on the map that is to be created. As the robot moves and measures its environment both at the point of origin and during movement, it acquires different sets of data about the environment depending on its positions and the available sensors. These datasets can then be compared by probabilistic techniques that broadly fall under the rubric of Bayesian filters, named after the eighteenth-century mathematician Thomas Bayes. This results in probability values for the new position of the robot as well as for the shape, position, and movement of surrounding objects. As Max Kanderske and Tristan Thielmann write, the "model of the world cannot be deterministic (it cannot be computed with certainty from an initial state), but has to be probabilistic" (Kanderske and Thielmann 2019: 121). All data collected by the robot is relative to its position, and its position can only be estimated in relation to the environment. In short, in order to locate itself, the robot needs a map, and to create a map, it must locate itself (Burgard et al. 2016: 1134). The tasks can only be solved simultaneously: location and mapping are inseparable. On the basis of the collected data, an always fragmentary map or model is constructed. This map only contains probability values. It is a construction of a possible, but probable, world.
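The claim that such a map contains nothing but probability values can be made concrete with a toy occupancy grid, a standard probabilistic map representation in robotics. The sketch below is a simplified illustration under assumed sensor probabilities, not any production SLAM system: each map cell holds only an estimated probability of being occupied, and repeated noisy readings sharpen the estimate without ever turning it into certainty.

```python
import math

def logodds(p: float) -> float:
    """Convert a probability into log-odds form for easy accumulation."""
    return math.log(p / (1.0 - p))

def prob(l: float) -> float:
    """Convert accumulated log-odds back into a probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

# Five map cells, initially maximally uncertain (log-odds 0 = probability 0.5).
grid = [0.0] * 5

def integrate_range_reading(grid, hit_cell: int,
                            p_hit: float = 0.8, p_miss: float = 0.3):
    """Cells before the measured range are probably free; the hit cell
    is probably occupied. p_hit and p_miss are assumed sensor models."""
    for i in range(hit_cell):
        grid[i] += logodds(p_miss)    # evidence that the cell is free
    grid[hit_cell] += logodds(p_hit)  # evidence that the cell is occupied

for _ in range(3):                    # three consistent range readings
    integrate_range_reading(grid, hit_cell=3)

print([round(prob(l), 2) for l in grid])
# -> [0.07, 0.07, 0.07, 0.98, 0.5]: probabilities, never certainties
```

Even after repeated measurements, the "map" is a list of probabilities between 0 and 1: exactly the fragmentary, probable world described above.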

This uncertainty, though, can be transformed into operational probabilities. In an authoritative 1987 essay, “Uncertain Geometry in Robotics,” engineer Hugh F. Durrant-Whyte (1987) suggests that environmental uncertainty can be processed by algorithms with the help of probabilistic methods. Around 1990, various algorithmic techniques emerged in robotics to solve this problem of dealing with uncertainty (e.g., Kalman filter, FastSLAM, particle filter localization). All of them attempt to merge data collected from available sensors into a world or scene model. The fact that the word “belief” is also often used to denote this model demonstrates that the autonomous system is deemed neither to achieve objective knowledge nor to accurately determine its situation (Thrun et al. 2006a, b: 3). The challenge is that the contours captured by sensors change with each movement of the robot. Accordingly, the model is always bound to the site of sensory observation and has an operative function: “The world modeling system serves as a virtual demonstrative model of the environment for the whole autonomous system” (Beyerer et al. 2010: 138). The model is a map that does not represent the outside world; rather, it shows what is relevant in relation to the robot with respect to what is registered by its sensors.

Because of the SLAM problem, the world or scene models consist of nothing other than probability values about the attributes of the environment (plus, possibly, externally supplied data about traffic, roadmaps, or GPS, which are not accurate enough for handling local navigation). These methods mark a conception of the environment as technically permeated by uncertainty. This uncertainty has two components. It means both the robot’s ignorance regarding its position and the unpredictability of the environment’s dynamics. This duplication of uncertainty is the epistemological core of SLAM.

These methods combine sensory detection and filtering algorithms to solve the SLAM problem. Based on the data about environmental relations, the central processing unit of the robot calculates, through mathematical filters, the probability of the possible positions and movements of the tracked objects in space (Thrun et al. 1998; Matthaei and Maurer 2015). Algorithmic filters are based on the comparison of sensor data collected at different positions and at different times. The superimposition of these data results in probabilities. Roughly speaking, the analysis of sensor data is accomplished by calculating probable attributes of the environment as well as dynamics of the movement by means of Bayesian filters from the measured distances and contours of objects at different times in relation to the robot. A Bayesian filter compares the model of the environment at time t−1 with the sensor data measured from another position at time t. Referring to the superposition of both measurements, the robot calculates the probability of its localization as well as of objects in the environment. Since the present t can always turn out to be different from the future calculated from the past t−1, this probabilistic approach corresponds to the virtuality of a possible but probable world. The identification of specific objects, such as children or traffic signs on the roadside, as well as the calculation of appropriate responses, predominantly undertaken by optical cameras and algorithms optimized by machine learning, is only a secondary step to this virtualization of the environment.
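The two-step cycle described above, comparing the belief at time t−1 with the measurement at time t, can be sketched as a minimal discrete Bayes filter over a handful of possible robot positions. The motion model, the sensor likelihoods, and all numerical values here are illustrative assumptions, not any specific vehicle's filter.

```python
def predict(belief, p_move=0.8):
    """Motion step (t-1 -> t): the robot intends one step to the right,
    but the motion is noisy and may fail, smearing out the belief."""
    n = len(belief)
    new = [0.0] * n
    for i, b in enumerate(belief):
        new[min(i + 1, n - 1)] += p_move * b    # moved as intended
        new[i] += (1.0 - p_move) * b            # stayed put
    return new

def update(belief, likelihood):
    """Measurement step at time t: multiply the predicted belief by the
    sensor likelihood of each position, then renormalize (Bayes' rule)."""
    posterior = [b * l for b, l in zip(belief, likelihood)]
    z = sum(posterior)
    return [p / z for p in posterior]

belief = [0.2] * 5                     # at t-1: uniform ignorance
belief = predict(belief)               # incorporate motion to time t
# Sensor reading at t, most consistent with position 2 (assumed values).
belief = update(belief, [0.1, 0.2, 0.6, 0.2, 0.1])
print([round(b, 2) for b in belief])   # probability mass concentrates at 2
```

The filter never outputs a position, only a distribution over positions; localization remains, as the text argues, a matter of superimposed probabilities rather than of representation.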

Mobile autonomous systems did not become operational until around 2005, when computing capacities were sufficient to optimize algorithms through machine learning and to evaluate in real time data acquired by improved sensors, in particular the new lidar method. At the cutting edge of these technologies were the three Grand Challenges launched in 2004, 2005, and 2007 by the Defense Advanced Research Projects Agency (DARPA) and the US Department of Defense (Iagnemma and Buehler 2006). The challenge of the first two races was for an autonomous car to drive a predetermined route through the Californian Mojave Desert without a driver or remote control. In the first DARPA Grand Challenge in 2004, the most successful vehicle, a prototype from Carnegie Mellon University, only managed twelve of the 227 km. Just one year later, the autonomous car Stanley, developed by the Stanford Racing Team and the engineer and AI researcher Sebastian Thrun, won the second challenge, applying the technical convergence of new sensor technologies with probabilistic algorithms and machine learning as a standard procedure. Today, this approach dominates the development of autonomous cars, robots, and drones in constantly evolving versions.

The prototype Stanley, whose further development Junior took second place two years later in the DARPA Urban Challenge, employed a whole range of sensors: a roof-mounted rotating lidar module, optical cameras, a radar, and a GPS module. The first step in these early attempts was to synthesize the data from these different sensors by the algorithmic filters of the SLAM method into a world model and, in the second step, to optimize the maneuvering by means of the new possibilities of machine learning (Thrun et al. 2006a, b). This double approach replaced the need for an a priori topological map and allowed maneuvering and navigation even in rough terrain or in the presence of other road users whose behavior cannot be predicted.

Since the DARPA Grand Challenges, algorithmic methods of localization and systems of spatial tracking that are used in autonomous cars as a sensory composite of optical, infrared, ultrasound, and thermal imagers, together with sonar, radar, laser, and lidar, do not simply represent the captured space but rather register the outlines and distances of objects through different wave spectra. However, since both the vehicle and the objects are potentially mobile, no attempt is made to specify more than probabilities of their position. These sensors provide relations between the vehicle and its surroundings, which, according to SLAM, can only be registered by comparison at different times and at different positions, which is to say, registered by and through movement. The complex sensors of these technologies are not focused on the mapping of an ontology in which all objects are registered on the basis of given coordinates as on a virtual map. Rather, objects are registered via the constantly changing relations of a virtual environment including probabilities and improbabilities. This virtuality is transformed into an operational model, which in turn enables time-critical interaction based on microdecisions that are bound to probabilities. Out of the mass of probabilities, these decisions choose the probabilities that are relevant for the car and necessitate specific behavior. Autonomy, in this context, is the capacity to choose options. It necessarily depends upon technologies that bring forth these options and thus instantiate microdecisions.

Even if the ADAS systems of today's cars do not operate on world models as interconnected and complex as the DARPA example (no 360-degree modeling, for example), the localized and situated sensing that is necessary for ADAS systems nonetheless operates on the basis of probabilities of the state of the environment. The worlds that are modeled are not representations but, rather, they are necessarily constructions of what may be relevant to operate under given conditions of uncertainty.

3 Virtuality, alternativity, and autonomy

Autonomous systems of environmental technologies are characterized at the operational level by the ratio of input and output, i.e., algorithmic processing of sensor data, but at the tactical level by virtualizing their environment through a probability calculus and Bayesian filters that locate the robot and map the environment. As a result of these methods, the world model contains the probability (and improbability) of possible states and events of the car's surroundings. Every probability contains a possible option for adaptive behavior. This ability to adapt can be understood as the system's autonomy. Autonomy in this sense is, on the one hand, conceptually interwoven with the virtuality of the respective world models and the necessity of an alternativity of options for adaptive behavior. On the other hand, this conception of autonomy is historically connected to a certain strand of research on artificial intelligence from around 1990. A closer look at this research reveals how the creation of worlds and autonomy are conceptually related.

Insofar as an autonomous car does not have direct access to the world, it uses the sensor technologies and algorithms mentioned so far to create its own environment—or rather a model that encompasses a multiplicity of possible environments in the state of probability. The difference between a “real world” and a “synthetic world” blurs because for the vehicle there is only the world of probability. The world model is the environment with which the car operates. In this sense, the world model is virtual and operational at the same time. It is bound to a multiplicity of futures of possible states and events, while this alternativity is the basis of microdecisions in the present. By means of this virtuality, in which probability and possibility are coupled to each other, the past is evaluated to anticipate the future and to make time-critical decisions in the present that allow or prevent this future. In this sense, the virtual environment created by these technologies is directly interwoven with the adaptive capabilities of the car.

Due to its probabilistic modes of operation, the vehicle works with world models that could always be different, thus opening possible—and therefore potentially different—decisions. This openness is the basis of its relational entanglement with the environment. Alternatives are intrinsic to probabilities because probabilities implicate both what might be and what might not be. These alternatives are the prerequisite of microdecisions that are always located at the tactical rather than the operational level of autonomous cars. Not only is this virtuality the basis of decisions, but it carries their temporality and quantity in itself. In this context, every quantified possibility of a state or event incorporates a microdecision. It is in this regard that these virtual, sensor-based, and algorithmically computed models of the environment created through SLAM differ radically from other models and simulations. Microdecisions are not external procedures that are executed independently of modeling; they are immanent to the autonomy of the system because—in this context—they are made on the basis of these probabilities.

The modeling of the environment on the tactical level cannot be explained with algorithmic processing alone because the calculated values of probability always contain improbabilities and thus possible other worlds, ergo alternative options that are the precondition of the system's autonomy. It is precisely this definition of autonomy as the disposition of possible worlds that, in the field of robotics around 1990, constituted the basis for a new research paradigm on what at that time was called “artificial life.” This understanding still frames the current escalation of autonomous technologies that are not self-determined but able to adapt to constantly changing conditions. In 1991, mathematician Paul Bourgine and biologist Francisco Varela, known, with Humberto Maturana, as the founder of the theory of autopoiesis, wrote in their introduction to the much-cited anthology Towards a Practice of Autonomous Systems: “Autonomy in this context refers to their [the autonomous system’s] basic and fundamental capacity to be, to assert their existence and to bring forth a world that is significant and pertinent without being pre-digested in advance” (1992: XIII). Bourgine and Varela articulate a critique of representationalist models of artificial intelligence, which tend to endow the robot with a world defined by the researcher, in favor of connectionist explanations, neural networks, and a constructivist epistemology.

For Bourgine and Varela, autonomy—both of biological and of technological systems—not only results in adequate problem-solving behavior but also in the creation of worlds, i.e., in different interpretations of the environment for the observer. For an external observer, these worlds are models in which the environment is part of the organism's behavior, or of the operating robot's changing probabilities and improbabilities, which enact structural coupling. Due to its autonomy, the organization of the system is able to change itself structurally in relation to the environment, while remaining operationally closed. From the point of view of an autonomous system, the environment appears as an independent source of perturbation. For an external observer, the system and the environment mutually and processually transform each other because they are structurally coupled.

The system is capable of creating new internal worlds because it has no access to the outside world—just as an autonomous vehicle only has sensor data from which probabilities are calculated. Operationally, these worlds emerge from the fact that the autonomous system, in order to adapt to the fluctuating conditions of the environment, must always have several possible options of behavior open to it. Consequently, there are different possibilities for how it may be restructured in the process of its adaptation. To describe this characteristic, Bourgine and Varela speak of viability. They use this term to indicate the fact that, when faced with an "unpredictable or unspecified environment" (Bourgine and Varela 1992: XIII), an autonomous system does not develop a single, determinate option to solve its challenges but rather, by dynamically organizing its closure as a system, a plurality of possible solutions. Without this openness, the system would not be autonomous.

Openness here does not mean that the system can freely choose, but that its organization is adapted to the environment in a variable yet non-arbitrary way. Uncertainty and unpredictability do not prevent autonomy but are required for the system to adapt to its environment. Unlike algorithms implemented to solve problems, an autonomous vehicle is tied to what can be called a nondeterministic path dependency: the system has several solutions to choose from. Microdecisions take place at the nodes which generate potential for difference by creating alternativity. This openness guarantees its continuity not because it is fixed to determinism but because its coupling with the environment means that only certain options are available.

The time-critical calculations of possible reactions necessary for microdecisions do not simply provide a result and trigger a response but are the basis of a dynamic interaction with the environment. Even if the response of the Tesla in the situation described in the first part of this paper is determined in reaction to the accident by an if–then condition, the dynamics of its openness to the uncertainty of the environment are preserved. Accordingly, it is necessary to investigate the practices and politics of microdecisions in their media specificity as well as their genealogy and epistemology, to understand the impact of autonomous technologies on the worlds in which we live.

4 Microdecisions and autonomy: a heuristic for the present

This article set out to explore the adaptive capacities of autonomous technical systems in complex environments. It encountered the problem that the common vocabulary of algorithmic processes—even if they contain probabilistic elements—is not well suited to describe the complexity of world modeling based on sensor data and filter algorithms. Instead, it proposed the term “microdecision” to grasp the nondeterminateness that is necessary for adaptive behavior in a constantly changing environment. In the final section, I would like to draw some more general conclusions about the theoretical consequences of this shift in conception.

Microdecisions are not associated with individual decision-makers; rather, they are effective precisely because they take place automatically—in incomprehensibly large numbers and as quickly as possible—according to a fixed set of rules. The act of microtemporal deciding is not bound to a sovereign decider but is located at the intersection of sensor data, filter algorithms, and the resulting probabilities. Microdecisions represent the smallest unit and the technical precondition of a present-day politics of autonomous technologies. From this perspective, the complexity of the adaptive entanglement of autonomous technologies with their environments, and the recursions in which both sides are linked, is the starting point of a description that refuses to treat the environment as passive material that is subject to active processing by the autonomous system. Rather, system and environment are bi-directionally coupled by microdecisions that constitute and draw on the probabilistic model. It would be a mistake to understand the microdecisions of autonomous automata merely as an implementation of what has already been decided because in that case the virtuality that is the basis for the operation of microdecisions falls out of view. The concept of virtuality adapted here makes it possible to describe the status of a model that is neither actual nor potential, but rather probabilistic. Because it is probabilistic, however, this virtuality contains the possibility of its improbability—the state or event could also prove incorrect or nonexistent in the future.

If the concept of decision is detached from living decision-makers and autonomous systems are described with a strong understanding of decision as something that takes part in creating worlds, then the question of sovereignty—and of the sovereign as the one who decides—inevitably arises: What does sovereignty mean under digital conditions? How must we reframe the place of the one who decides when there is no one who decides? For macrodecisions, the power to decide consists of the legal and political acts that prescribe fixed rules—for example, that the life of bystanders is to be valued above the life of the occupants of an autonomous vehicle. On the mesoscale, decisions are individual acts in specific situations. Microdecisions cannot be delegated to human decision-makers and can never be sovereign in the sense of an intentional use of power, even if they are decisive.

At this point, the history of the concept of the automaton may be helpful. Originally, the term referred to mechanical statues that were able to imitate movements. In Aristotle's Politics, where the term appears in reference to examples from the Odyssey, the automaton is considered to be a tool that is not sovereign. The automaton is situated on the same level as slaves in an instrumental order; and similarly to slaves, the automaton is not allowed to make decisions (2009: 197b12f). In this conception of politics, sovereignty is tied to decision-making and everyone who is not able to decide is excluded from politics. The one who decides becomes sovereign by deciding about his fate. Neither the automaton nor the human slave is sovereign, and consequently neither is a political being, inasmuch as they cannot decide. Although this Aristotelian term is not suitable for the description of modern technologies, it clarifies what it means to assign to them powers of decision: these technologies can then no longer be described as slaves or tools that only perform given tasks in a deterministic manner. The appearance of an automaton that makes decisions suggests that this concept of sovereignty needs to be adjusted—with consequences for the constitution of political subjectivity. The one who decides might not have a place at all when there is, as in the examples discussed so far, a cognitive assemblage of nonsovereign (in the sense of automaton), but sovereign (in the sense of decidability) decision-makers.

To get closer to the heart of this question, I would like to comment on another text by Aristotle. In his Physics, he devises a less frequently considered distinction between two types of chance. Aristotle describes tyche as the unintended but potentially intentional, nonnecessary accident, and automaton as the fundamentally nonintentional coincidence (2008: 197b12). If someone throws a branch off a tree and hits someone without intention, it is coincidence in the sense of tyche. A branch that falls from a tree after a storm and hurts someone is a coincidence in the sense of automaton. Both events are nonnecessary and unpredictable. The event that is automaton cannot happen on purpose, as it is unrelated to someone's intention. Nonetheless, automaton is something that can be grasped by thinking—just as the cast of a die is unpredictable but will necessarily result in one of the six numbers and thus can be regarded as automatic. In the words of Yuk Hui, automaton designates something "that is already within the possibility of being itself" (Hui 2015: 129). In a similar sense, while the decision for one or the other option may be nondetermined, the options themselves are determined by the algorithmic framework. In the examples presented here, they are created by SLAM as an alternativity of possible worlds.

Tyche, though, in contrast to automaton, remains contingent for Aristotle. This distinction also separates humans from inanimate machines: "Nothing done by an inanimate object, beast, or child is the outcome of luck, since such things are not capable of choosing" (2008: 197b8). Coincidence in the sense of automaton, by contrast, also extends to animals and inanimate objects.

With today’s autonomous automata operating on the basis of microdecisions, it becomes necessary to rethink this relationship and to look at the “unintended intentions” of the automaton. Indeed, it may even be necessary to introduce a new category of automatic unpredictability and insecurity. An accident caused by an autonomous car is clearly not intended as an accident but is, rather, a consequence of an intention to act in a way that has been decided. Nonetheless, such an event in the sense of automaton is a matter of chance: it is nonnecessary and unpredictable.

Such automata are open to different possible options between which they have to decide without being sovereign in the sense of an intentional subject. If an autonomous car gains sovereignty—in the sense that, even momentarily, it subjects others to its sovereignty, as opposed to being subjected to the will of the driver—this transformation of power entails many political questions about platforms, digital capitalism, infrastructures and the economies of distributed and stacked technologies (Scholz 2013; Rossiter 2016). To further this perspective, the unintended effects of automata that are able to decide can be taken into consideration as components of environmental adaptation. Philosophically, this means that even though these automata lack both intentionality and sovereignty in any strong sense, their decision-making depends on complex configurations of technologies that create a multiplicity of possible worlds. These possible worlds are virtual in their specific probability. Such decision-making capabilities need to be explained without recourse to concepts of intentionality and intentional sovereignty; in fact, they challenge the binarisms within which these concepts are embedded. In this sense, microdecisions are not the implementations of what has been decided but, rather, are characterized by an openness to the contingent that cannot be grasped in purely algorithmic terms.

5 Conclusion

This paper has presented the first steps toward a new conception of microdecisions and environmental adaptation. In reinterpreting the concept of decision, my first aim is not to limit the agency and autonomy of humans, but rather to attribute to autonomous systems (which may include humans) the capacities of "cognitive assemblages" (Hayles 2017). The many legal and ethical discussions provoked by autonomous technologies encounter this problem, but they rarely consider it on a conceptual level. The concept of microdecision allows us to attribute agency and autonomy to these technologies without at the same time assigning them responsibility, which remains situated where macrodecisions are made. The important ethical questions that arise with the automation of traffic should be asked with this perspective in mind.

The second aim of this reinterpretation is to undermine the assumption that the behavior of these technologies is determinate. My reinterpretation replaces this assumption with a nondeterministic path dependency and the virtuality of alternatives. While macrodecisions can always be made differently at the level of programming and implementation, and human decisions depend upon the contingency of mental states and external influences, it is important to posit this openness for microdecision-making as well. A decision presupposes that there are alternatives between which one chooses. This openness, which legitimizes the concept of decision, does not mean that a decision could be made freely. Openness is not in opposition to determinateness. Rather, the alternatives that are open to such a microdecision are part of a nondeterministic path dependency with different exit points. Alternativity is the result, but not the condition, of the technologies of environmental adaptation. The assumption that decisions are determined by mechanistic processes leads to a simplified distinction between the determinate agency of the machine and the free agency of humans. To prevent such shortcomings, it is important to examine the demands and expectations of foreseeable behavior associated with an instrumental understanding of computability and algorithmizability.

Decisions, whether made by humans or by machines, are more than the execution of a predetermined process: they always draw a distinction between possibilities. Even in a technically and mathematically defined context, decision-making exceeds the execution of a predetermined protocol or a programmed algorithm. What this article offers is an alternative, non-technical language of description that, while taking algorithms into account, enables a vocabulary that recognizes the unintentional sovereignty, temporality, and medium-specificity of technologically based decision-making. In other words: to understand how autonomous technologies shape the world in which we and they interact, we need to overcome the assumption of their passivity, their lack of agency, and their determinateness. The term "microdecision" can help us move, in this regard, toward a more adequate conception of environmental systems with a kind of strategic or operational autonomy.

To simply delegate decisions to machines and then to conceive of them as determined algorithms, as something that necessarily happens as it happens, is at its core a depoliticizing act. In this regard, it is the task of the critical humanities to politicize machines. This means taking the alternatives of each decision, be it fast or slow, as a starting point. Such a discussion should understand automata not as sovereign, but as decisive. As deciding automata, they achieve unintentional sovereignty. In this sense, the coming humanities of virtuality would have to look at the implementation of standards of decision-making through virtuality and the ambiguity of the probable, without explaining the technologies in question as simple performing agents or lapsing into the desire for an instrumental understanding of algorithmic computability. Automata and their environmental relations cannot be conceived in this way. Decisions have a non-computational dimension, even if they are processed by computers. Once one comes to think of probability in terms of ambiguity instead of uniqueness, it becomes clear that in the virtuality of world modeling, every probability carries the potential for its own improbability, a value that does not come to apply. The car's operations will still be real. However, the ambiguity of the probable and its exploration through the virtual humanities are important when deciding in which world and in which environment we want to live, beyond the scope of human sensibility.