1 Introduction: from shared and cooperative control of situations to shared and cooperative control between humans and machines

In introducing how shared and cooperative control in human–machine systems are related, we feel it is important to realize that cooperation and tools have shaped Homo sapiens for a very long time. Tomasello (2014) describes how human cognition evolved and stresses that an essential element in the rapid rise of Homo sapiens to become the most dominant species on this planet was the ability to develop shared intentionality and to cooperate towards common goals. Although other species have this ability to some extent (Harcourt and de Waal 1992), H. sapiens excels in complex cooperation with other members of its species, and also with different species. An example of this is the “cooperation” with other mammals like cattle, or the increasingly cooperative work with dogs, horses, elephants, etc. The interplay of new tools, cooperation and competition within the Homo species and with other species was crucial for our species. Marean (2015) describes how H. sapiens left Africa about 70,000 years ago and spread all over the planet. The key to success, the “ultimate weapon”, was not only the new deadly arrows and spears that this species had developed, but the ability to cooperate very closely within one's own group, combined with brutal competition against other mammals and other Homo species outside of that group (Fig. 1).

Fig. 1

The ultimate weapon of Homo sapiens was not the spear, but cooperation (Marean/Scientific American 2015)

The scientific study of cooperation between humans is increasingly approached through the concept of “joint action”, which can be regarded as “any form of social interaction whereby two or more individuals coordinate their actions in space and time to bring about a change in the environment” (see, e.g., Sebanz et al. 2006). Cooperation between human individuals and groups of different sizes has existed for hundreds of thousands of years. Human–human cooperation has been investigated in philosophy for millennia and in psychology and sociology for centuries, and yet essential aspects (e.g., the importance of mirror neurons) have been revealed only recently (Rizzolatti and Sinigaglia 2008). Applying this knowledge to the joint action and cooperation of humans and machines holds great potential.

Tools stimulated human–human cooperation, which in turn stimulated tool use and the development of more complex tools: an essential element in our evolution. Some of the earliest evidence of tool use is about 270,000 years old: the first wooden spears, found in 1995 in Schoeningen. These first tools were an extension of physical power and mobility. Mechanical machines allowed us to harness the power of wind, water, and later—during the industrial revolution—steam and electricity. Since World War II, the advance of cheap and powerful computing and sensing has enabled us to develop tools with cognitive capabilities, capable of acting automatically—albeit within boundaries. Norbert Wiener was the first to realize that human and machine would need to communicate in order to interact well (Wiener 1948). With the development of tools that can think and act, the intimate connection and interplay between humans and technology has come full circle: we now need to develop tools that we can cooperate with.

Our society is increasingly confronted with automation, not only in airplanes and behind fences in factories, but also in highly or fully automated vehicles (Tsugawa et al. 2000; Dickmanns and Zapp 1987; Parent and Daviet 1993; Thrun et al. 2006), and many foresee the advance of robot technology directly into our living environment. In 2019, we see a rush towards autonomous technology, which some in the community regard as hype that will slide into a ‘trough of disillusionment’ before it can climb up to the ‘plateau of productivity’ (Panetta 2017). We expect that autonomy will increasingly be discussed in terms of controllability and cooperation. We also expect that assistance and automation are paradigms that will prevail and extend towards cooperative and symbiotic relationships between humans and machines.

Traditionally, there has been a distinction between assistance systems (where the machine only supports the human) and automation (where the machine takes over the main task, replacing the human under certain conditions). Sheridan recognized early on that the distinction should not be so black and white, and proposed the influential concept of levels of automation (Sheridan and Verplank 1978), which illustrates that many design options for human–automation interaction exist. There are many situations where both the human and the machine should act together at the same time, and where authority and tasks need to be shifted or adapted (Sheridan 2011; Miller and Parasuraman 2003). These insights have led to many related theoretical concepts and design approaches known by a plethora of names, such as shared control, cooperative control, human–machine cooperation, cooperative automation, collaborative control, co-active design, cobots, physical human–robot interaction, adaptive automation, and adaptable automation. There is considerable overlap between these concepts and approaches, and the field suffers from a lack of consensus and definition (Abbink et al. 2018).

The authors of this paper have been particularly involved in automation, “shared control” and “human–machine cooperation” (sometimes also termed “human–machine collaboration”). Shared control stresses the fact that human and machine share control over a system together (e.g., Griffiths and Gillespie 2004; Abbink 2006; Flemisch et al. 2010; Abbink et al. 2018), whereas human–machine cooperation stresses the fact that humans and machines share the same tasks and control a situation cooperatively (e.g., Hoc and Lemoine 1998; Hoc 2000; Biester 2008; Pacaux-Lemoine 2014; Flemisch et al. 2003, 2015; Johnson et al. 2014).

We firmly believe that shared control and human–machine cooperation have so many aspects in common that they should be analyzed and developed together. The goal of this paper is, therefore, to:

  • provide a clear overview of commonalities and differences in shared control and human–machine cooperation, and the links to other related concepts;

  • propose working definitions and conceptual models that show the connection between shared control and human–machine cooperation.

2 A brief overview of concepts and definitions: from influence and control to shared control

What is the most crucial point of our discussion about shared control and cooperation? A good starting point is to define “control” and its weaker relative, “influence”. The essence of control is a sufficiently strong influence of some parts of the world on other parts of the world. Control means having “the power to influence […] the course of events” (Oxford Dictionary 2016). Applied to human–machine systems, the common understanding might be even crisper: having control means influencing the situation so that it develops towards (or stays within) the preferences of the controlling agent.

From a general, abstract perspective, the world (including natural systems and human–machine systems embedded in their environment) is not static, but dynamic: changing over time from one state or situation to another. A substantial part of this change is influenced by the actions of acting subsystems (or actors or agents), either natural (e.g., humans, animals) and/or artificial (e.g., machines), and their interplay with the environment. Based on an (explicit or implicit) understanding of good or bad, i.e., desirable or less desirable situations (e.g., with the help of goals and/or motivations), agents perceive the world and influence the situation using their abilities to act, thereby controlling part of the world and forming (open or closed) control loops.
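One minimal way to formalize such a closed control loop is in standard discrete-time state-space notation (a sketch only; the symbols below are our illustrative choice, not notation taken from the cited literature):

```latex
\begin{aligned}
  x_{t+1} &= f(x_t,\, u_t,\, w_t) && \text{the world state } x_t \text{ changes under the agent's action } u_t \text{ and disturbances } w_t\\
  y_t     &= h(x_t)               && \text{the agent perceives only part of the state}\\
  u_t     &= \pi(y_t,\, g)        && \text{the agent acts on its perception, guided by its goals } g
\end{aligned}
```

The loop is closed because the action u_t feeds back into the next state; it remains open if the agent acts without perceiving the resulting state.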

Applied to a concrete task, imagine somebody carrying a small table over a distance without dropping it. We could certainly say that this person controls the movement of the table. Now imagine a second person joining the first in carrying the table (see Fig. 2). As soon as the second person joins, he or she also influences some part of the movement in a way that makes the situation develop in, or remain in, a certain way, e.g., not dropping the table to the ground or not bumping the table into an obstacle. Both persons share the physical load, share the control of the table, share the guidance or maneuvering, and share the task of safely navigating the table to another place.

Now consider a human–machine cooperation situation by replacing the second person with a machine. Let us start with a simple machine, e.g., a small wagon that carries the load. We would certainly not say that the wagon is controlling the movement, but we might call it assisting with the physical load. Now imagine that the wagon is a robot that also senses the environment and tries not to bump the table into obstacles. In this case, we would speak of a sharing of control between human and machine.

Now imagine an additional person in the room observing the other two actors, whether they are both humans or one human and one machine, carrying the table (see Fig. 2). This person can survey the situation and give guidance like “turn around”, “you first” or “let me first open the door”. Here comes an interesting fork in the understanding of control: if we understood “control” in a very broad sense, and assumed that the influence of this person is so strong that the situation really develops in the way this person prefers, we might be tempted to speak of shared control even here. If there is a strict hierarchy of command (e.g., soldiers that obey the commands of a superior officer), we might be tempted to speak of shared control, even if the soldiers would speak of ‘command & control’. However, in the technical world, shared control is mainly understood as physical control, e.g., haptic control via a control device. To avoid confusion, an extension of shared control beyond haptic contact, e.g., one also including voice communication, should be explicitly explained as an extension of the shared control concept (Abbink et al. 2018).

2.1 From shared control to shared control, guidance and navigation

In general, shared control seems to be mainly used to describe an action with a very direct impact on the world, e.g., controlling a movement. This can be described by the term “operational”, as part of a system of layers based on the level of cognition of the task, which decomposes tasks into operational, tactical and strategic/navigational/planning layers (e.g., Mintzberg 1980; Michon 1985; Woods et al. 2004; see also Abbink 2006; Lemoine et al. 1996). In our example of the group carrying the table, the two persons carrying the table would control the table operationally; the third person would influence the table tactically. Imagine that our heroes carry the table to a bus so that they can drive together to a scientific congress or a concert in another city: they might have talked beforehand about the strategy of why and how the table is moved from A to B (strategic/navigational layer). In our understanding, shared control starts on the operational layer, and we might extend the concept towards shared tactics and shared strategy. Further examples of (low-level) shared control can also be found in Mulder et al. (2012).

The word “control” is sometimes used similarly to “authority”; e.g., Inagaki (1999, 2003) describes a “trading of authority”. Flemisch et al. (2012, 2017) try to integrate this and describe authority as a prerequisite for control, which should be present before the actual control is performed. Extending this line of argumentation, it also makes sense to speak about sharing or trading of authority for control, guidance and navigation as a prerequisite for shared control on these layers (Inagaki et al. 2018, Personal Communication; Pacaux-Lemoine and Flemisch 2018).

Quite close to this general layer concept, a layered concept of control, guidance and navigation applied to movement has become increasingly useful: here, different layers of movement are differentiated regarding the time criticality of the influence. Control is usually the most time-critical layer and influences stability, attitude and direction. Guidance influences the general direction and/or the next maneuver, while navigation is route planning on an even longer time frame. As shared control originated in the domain of physical control of movement, we understand it not so much in the general sense of control, but as time-critical control, and can extend the concept of shared control to shared guidance and shared navigation. The most crucial bridge that we are trying to build is from shared control to cooperation and all its derivatives, such as cooperative automation or cooperative guidance and control.
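As a rough software illustration of how the three layers differ in time criticality, consider three nested loops running at different rates (a hypothetical sketch; all names, gains and update rates are our own assumptions, not taken from the cited literature):

```python
"""Toy sketch of the control/guidance/navigation layering by time criticality."""

K_P = 0.5  # proportional gain of the fast, operational control layer


def plan_route() -> list:
    """Navigation layer: route planning on the longest time frame."""
    return [0.0, 5.0, 10.0]  # target positions along the route


def choose_maneuver(obstacle_ahead: bool) -> str:
    """Guidance layer: general direction and/or the next maneuver."""
    return "evade" if obstacle_ahead else "follow_route"


def control(position: float, target: float) -> float:
    """Control layer: time-critical stabilization toward the current target."""
    return K_P * (target - position)


def run(steps: int = 3000, dt: float = 0.01) -> float:
    position, route, maneuver = 0.0, plan_route(), "follow_route"
    for i in range(steps):
        t = i * dt
        if t % 10.0 < dt:   # slowest loop: replan the route about every 10 s
            route = plan_route()
        if t % 1.0 < dt:    # medium loop: re-decide the maneuver about every 1 s
            maneuver = choose_maneuver(obstacle_ahead=False)
        target = route[-1] if maneuver == "follow_route" else position
        position += control(position, target) * dt  # fastest loop: every tick
    return position


print(f"final position: {run():.2f}")  # converges toward the last waypoint
```

The point of the sketch is only the rate separation: the operational loop runs every tick, while guidance and navigation update ever more slowly.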

2.2 From shared control to cooperative guidance and control and human–machine cooperation

In general, “cooperation” is derived from the Latin words “co” (together) and “operatio” (work, activity) and is understood to mean “working together” or “the action or process of working together towards common goals” (Oxford Dictionaries 2014). The use of the term ‘cooperation’ in the context of human–machine systems was suggested by Rasmussen (1983), Hollnagel and Woods (1983) and Sheridan (2002), elaborated for general human–machine cooperation, e.g., by Hoc and Lemoine (1998), and exemplified for vehicle control, e.g., by Flemisch et al. (2003), Biester (2008), Holzmann (2007), Flemisch et al. (2008a), Hakuli et al. (2009) and Onken and Schulte (2010). In the literature, the term collaboration is often used as a synonym. Since this term has negative connotations in some languages (e.g., in Dutch and German), we do not explicitly use it here, but of course include all literature on collaboration.

Similar to the definition of cooperativeness in psychology, we define human–machine cooperativeness as a trait concerning the degree to which a machine is generally agreeable in its relations, behavior and interaction with humans, or better: complementary to human needs, as opposed to competitive, aggressively self-centered or hostile (Cloninger et al. 1993).

As Flemisch et al. (2014a, b) sketch, “it can be useful to see cooperation and cooperativeness not so much as a crisp definition, but as a cluster concept. The idea of a cluster concept goes back to Wittgenstein’s fundamental critique of classical definition theory, which is well suited to defining logical concepts in mathematics and physics, but has severe limitations in defining complex issues. Wittgenstein explains this with the example of a “game”, which can be extremely difficult to define with classical definition theory. Instead, he proposes to define a concept with a list of attributes that are generally important for this specific concept, which was later refined to a concept of clusters (e.g., Gasking 1960)”. For cooperation, or the quality of cooperativeness between humans and machines, Flemisch et al. (2014a, b) identify the following attributes:

  • sufficiently autonomous machine capabilities for higher levels of automation;

  • intuitive interaction with a sufficient outer compatibility between human and machine;

  • sufficient inner compatibility:

      ◦ compatible goal and value systems;

      ◦ compatible representation of action, e.g., a movement through space;

  • traceability and predictability of abilities and intents in both directions:

      ◦ of the machine by the human;

      ◦ of the human by the machine;

  • dynamic distribution of control/transitions in automation modes, e.g., in the form of delegation and re-delegation of tasks or subtasks, which can also be called trading of control;

  • arbitration of conflicts, e.g., if there are different opinions/intentions between the partners;

  • adaptivity as a dynamic balance of flexibility and stability of both the human and the technical subsystems.

In addition to that, Pacaux-Lemoine et al. (2011) describe know-how (to operate) and know-how-to-cooperate, e.g., via a common work space, as a fundamental basis of cooperation (Pacaux-Lemoine and Debernard 2000). The know-how is the agent’s ability to control a part of a process, while the know-how-to-cooperate is the agent’s ability to cooperate with other agents concerned by the process control and who have similar and/or complementary abilities. The know-how-to-cooperate allows building up a model of the other in order to identify and manage common goals or procedures, and to facilitate the activity of the other through interference management (Hoc and Lemoine 1998). Such cooperative activity should be supported by a Common Work Space, a visual, auditory and/or haptic interface that provides information from the process or environment, but also about other agents’ current and future individual and cooperative activities. A Common Work Space thereby supports team situation awareness (Millot and Pacaux-Lemoine 2013).
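Very loosely, this separation could be pictured as two distinct interfaces that a cooperative agent implements (a hypothetical sketch; the class and method names are ours, not from the cited works):

```python
from abc import ABC, abstractmethod


class KnowHow(ABC):
    """The agent's ability to control its part of the process."""

    @abstractmethod
    def act(self, process_state: dict) -> dict:
        """Produce a control action from the perceived process state."""


class KnowHowToCooperate(ABC):
    """The agent's ability to cooperate with the other agents concerned."""

    @abstractmethod
    def update_partner_model(self, observed_partner_action: dict) -> None:
        """Build up a model of the other to identify and manage common goals."""

    @abstractmethod
    def post_to_common_workspace(self, workspace: dict) -> None:
        """Share current and future activities, supporting team situation awareness."""


class CooperativeAgent(KnowHow, KnowHowToCooperate):
    def __init__(self) -> None:
        self.partner_model: dict = {}

    def act(self, process_state: dict) -> dict:
        # Toy operational know-how: act to reduce the perceived error.
        return {"command": -0.1 * process_state.get("error", 0.0)}

    def update_partner_model(self, observed_partner_action: dict) -> None:
        self.partner_model.update(observed_partner_action)

    def post_to_common_workspace(self, workspace: dict) -> None:
        workspace["my_intent"] = "keep course"  # made visible to the partner
```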

Cooperation presupposes that a situation is shared between agents so that they are commonly aware of the current or future environment or process state. Situation awareness, as well as shared, distributed and team situation awareness, have been defined to support cooperation in the perception and understanding of the situation (Endsley 1995; Salas et al. 1995; Shu and Furuta 2005; Salmon et al. 2008). This means that an agent that is able to update the common situation awareness may influence other agents through its/his/her own understanding of the situation (validation by others) or may trigger other agents’ reactions (disagreement, requests for explanation by others).

3 How could shared control and human–machine cooperation relate to each other?

Loiselet and Hoc (2001) already distinguished “cooperation in action”—which seems to be close to the shared control concept—from cooperation in planning and meta-cooperation, which are not directly concerned with current control. Shared control seems to focus on the common task or function on the operational layer, e.g., the control layer, while cooperation adds ways to increasingly take into account the other agent and the other layers (see Fig. 4). Beyond knowing what the other is doing, cooperation allows an agent to build a model of the partner in order to know how it is possible to cooperate with it/him/her. A cooperative agent (one that has know-how-to-cooperate) can gather information about the other and analyze this information to make decisions about their cooperation.

Fig. 2

Everyday situation with joint action, shared control and human–human cooperation (Flemisch et al. 2016)

Such activity can again feed directly back to the operational layer, e.g., with shared control. But such cooperative activity can and should be prepared at the tactical and strategic layers, and might even be prepared and maintained with communication not concerned with the operational, tactical or strategic layers at all.

With the example of our actors carrying the table together, supported by others, it becomes clear that shared control and cooperation are not exclusive concepts, but are nested: the two actors at the table share the control, here with a haptic connection, and they cooperate with each other and with the third actor. Cooperation can include shared control, but there can be cooperation without shared control. Moreover, there could be shared control without (enough) cooperation, e.g., if one of the agents acts below a certain threshold regarding the attributes of cooperativeness described above. The sharing and cooperating can also happen on other layers, like the tactical and the strategic layers (see Fig. 4). Applied to vehicles, this can lead to shared and cooperative guidance and control, as described, e.g., by Altendorf et al. (2019) in this issue.

We also found it helpful to distinguish interaction that is specifically about the task(s), on a strategic, tactical and operational layer, from interaction that is specifically about the mode of the cooperation. In communication theory, this is called meta-communication (e.g., Bateson 1956). This correlates with Pacaux-Lemoine and Debernard’s (2000) concepts of “know-how-to-cooperate” and “know-how”. To keep in line with the words operational, tactical and strategic(al), we call this cooperational, or meta-cooperation, which can include, e.g., communication about cooperation (i.e., meta-communication). This layer is transversal to the three other cooperation layers.

From a system perspective (see, e.g., Dekker 2014), the operational layer, e.g., control, can be seen as the ‘sharp end’ of a process, which is supported by the ‘blunt end’: the tactical and strategic layers of the process (e.g., guidance and navigation). This can be compared to a spear that human and machine hold jointly, and navigate and guide together to control where the sharp tip of the spear hits immediate reality (Fig. 3, left). As with a spear, the cooperation partners increase their chances of success if the sharp end and the blunt end work together. If we want to develop shared control further, one promising direction is towards more cooperativeness. If we want to bring cooperativeness directly into action, shared control can be a promising option.

Fig. 3

Metaphor for the relationship between shared control and cooperative control: a human and a machine agent (computer) cooperate by jointly holding the “blunt end” of the functionality to control its “sharp end”, where the functionality ‘hits reality’ (adapted from Flemisch et al. 2016)

Also captured in Fig. 4 is the position of Abbink et al. (2018) that the shared control paradigm can also be expanded to the tactical and strategic layers, e.g., when human and machine work together simultaneously on the guidance or on the navigation (light orange area). An important contrast noted in that paper is between shared control, where human and machine work together simultaneously, and traded control, where human and machine take turns in controlling the task. Note that a combination of shared and traded control is also possible at each task layer, e.g., when the two actors share control at all times, but in different proportions, and trade between these control distributions. An example of this is the cooperative guidance and control scheme “H-Mode” (e.g., Altendorf et al. 2015), where control can be shared and traded between human and co-system in the modes “Tight Rein” (similar to SAE level 1, about 80% human, 20% co-system) and “Loose Rein” (similar to SAE level 2, about 20% human, 80% co-system), and traded in the mode “Secured Rein” (SAE level 3/4, conditionally/highly automated, 100% control by the co-system). This paper postulates that all three control options (shared control, traded control, and shared and traded control) fall under the definition of human–machine cooperation, and can be flexibly combined.
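To make this concrete, a control distribution like H-Mode's could be sketched as a weighted blend of both inputs, whose weights are traded when the mode changes (a minimal sketch using the approximate percentages quoted above; the function and mode names are our own simplification, not the H-Mode implementation):

```python
# Approximate authority shares per mode, as quoted above: (human, co-system).
MODES = {
    "tight_rein":   (0.8, 0.2),  # ~SAE level 1: shared, mostly human
    "loose_rein":   (0.2, 0.8),  # ~SAE level 2: shared, mostly co-system
    "secured_rein": (0.0, 1.0),  # ~SAE level 3/4: traded, co-system only
}


def blended_command(mode: str, u_human: float, u_cosystem: float) -> float:
    """Shared control within a mode: a weighted blend of both inputs."""
    w_human, w_cosystem = MODES[mode]
    return w_human * u_human + w_cosystem * u_cosystem


def trade(new_mode: str) -> str:
    """Traded control between modes: the authority distribution itself changes."""
    assert new_mode in MODES
    return new_mode


# The same inputs yield a human-dominated command in "Tight Rein" ...
mode = "tight_rein"
print(blended_command(mode, u_human=1.0, u_cosystem=0.0))  # 0.8
# ... and a co-system-dominated command after trading to "Loose Rein".
mode = trade("loose_rein")
print(blended_command(mode, u_human=1.0, u_cosystem=0.0))  # 0.2
```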

Fig. 4

Proposed relationship between shared control, shared and cooperative guidance and control, and human–machine cooperation. Extended from Flemisch et al. (2016)

It is important to realize that these three control options can also be combined differently across the layers: for example, a shared control scheme on the operational layer, a shared and traded control scheme on the tactical layer, and a traded control scheme on the strategic layer. Figure 5 illustrates different combinations of shared and traded control at two time instances t1 and t2. At time t1, the human is not controlling, but only influencing the situation on the strategic and tactical layers (dotted line), while sharing control on the operational layer with the machine (solid line). At t2, the human has traded control on the strategic layer with the machine and is now completely controlling that layer (solid line), while the machine is out of the loop (no line). On the guidance layer, the human has traded control away but still keeps some influence (dotted line), and is still sharing control with the machine on the operational layer, but in a different proportion than at t1.
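The constellation just described can be written down as a small per-layer table of the human's involvement (our own illustrative encoding of Fig. 5; the percentages in the comments are assumptions, since the text gives none for this figure):

```python
# Human involvement per layer at the two time instances of Fig. 5.
# "control" = solid line, "influence" = dotted line, "shared" = shared control.
human_involvement = {
    "t1": {
        "strategic":   "influence",  # dotted: influencing only
        "tactical":    "influence",  # dotted: influencing only
        "operational": "shared",     # solid: shared control (e.g., 50% human)
    },
    "t2": {
        "strategic":   "control",    # solid: traded to the human; machine out of the loop
        "tactical":    "influence",  # dotted: control traded away, some influence kept
        "operational": "shared",     # solid: still shared, but in a different proportion
    },
}
```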

Fig. 5

Example for a combination of traded control, shared and traded control and shared control on different layers

How does this relate to another important paradigm of control, supervisory control (e.g., Sheridan 1976)? In our understanding, supervisory control is a special form of traded control, which can happen on any of the layers. To clarify the differences, Fig. 6 shows an extension of the framework, where the different stages of behavior—perception, cognition and response (as described, for example, by Wickens)—can also be differentiated. It is important to note that the minimum requirement for shared (guidance and) control is that the human and the machine respond together, whereas it is not necessary, but often the case, that they also perceive together. Cooperation usually goes further, and also includes cases where one of the two partners is only perceiving and processing, not yet responding (e.g., as part of a supervisory control scheme), and then trades control from the other partner (e.g., when something dangerous is perceived). It is important to mention here that the cognition of the partners is usually linked directly only via the other two stages, e.g., by observing the response of the partner. Indirectly, the partners can be linked much more closely through common mental models, which has been described as inner compatibility (Flemisch et al. 2008a, b). It is also important to note that in nature—and increasingly in technology—perception, response and cognition are more closely linked than this simplified model suggests. An example is haptic interaction, e.g., when grasping a table, where the response is very closely tied to perceiving the environment and the table. It is also worth noting that parts of human cognition are closely bound to bodily perception and response, as described in the concept of embodied cognition (see, e.g., Wilson and Foglia 2011).

Fig. 6

Four-layer framework for cooperative and shared control, merged with stages of action. It is an extension of the framework towards perception, central processing and action implementation proposed in Flemisch et al. (2016). The arrows indicate that the response of one partner is perceived by the other and in turn influences their responses

Figure 7 shows the general model applied to a specific case of a supervisory control situation. In this case, the human is still involved completely at the strategic task layer, and only partly at the tactical and operational task layers, with (limited) perception and cognition involved, but with no response part activated. In the automotive domain, there is a wealth of research showing that humans have difficulties in adequately maintaining prolonged supervisory control, and that it especially takes time to trade from supervisory control to manual control. Part of the delay is the time to move hands or feet back to the control interfaces, but the largest part actually arises from the time needed to regain situation awareness with respect to the environment. When control is not traded, but haptically shared (Abbink et al. 2012), such delays can be minimized, because the driver is then physically linked to the automation’s actions in the environment. Haptic shared control thereby fosters engagement, situation and mode awareness, and allows drivers to make use of fast reflexive contributions to control (Abbink et al. 2012, 2018).
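As a loose illustration of why haptically shared control keeps the driver in the loop, consider torques summing on a steering wheel (a toy sketch; the gain and names are our assumptions and do not reproduce the controllers of Abbink et al. 2012):

```python
def steering_torque(driver_torque: float, lane_error: float,
                    k_guidance: float = 2.0) -> float:
    """Haptic shared control sketch: the automation's guidance torque is added to
    the driver's torque on the same steering wheel, so the driver continuously
    feels, and can override, the automation's corrections."""
    automation_torque = -k_guidance * lane_error  # pushes toward the lane center
    return driver_torque + automation_torque      # torques sum physically at the wheel


# A passive driver still receives corrective action (no takeover delay):
print(steering_torque(driver_torque=0.0, lane_error=0.3))  # -0.6: automation corrects
# An active driver can override by applying more torque:
print(steering_torque(driver_torque=1.5, lane_error=0.3))  # 0.9: driver dominates
```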

Fig. 7

An example of supervisory control in the four-layer framework (other constellations of supervisory control are also conceivable)

Another example of the application of this model highlights the impact of the experience and expertise that human and machine may have in cooperating with each other (in reference to Rasmussen’s model). If they are used to cooperating with each other, they have a good model of the other and can adopt skill-based cooperative behavior. In the example (Fig. 8), the human decides on a task allocation without verification or negotiation, because he/she trusts the machine to perform operational tasks correctly. In this example, however, the human has to refer to cooperative rules to decide the task allocation at the strategic and tactical layers, and the machine, which is still in a learning phase regarding cooperation, cannot decide the allocation yet (Habib et al. 2017).

Fig. 8

Four layers of cooperation combined with Rasmussen’s levels of skill-based, rule-based and knowledge-based behavior. Green, unlabeled boxes are activities that combine cooperational know-how with information and actions

4 Examples of shared and cooperative control concepts in the framework

To demonstrate what can be done with the framework, a couple of short examples will be presented. In this issue, Altendorf et al. (2019) apply the framework presented above to modeling utility functions for driving with advanced driver assistance systems (ADAS). In their approach, the authors emphasize that human and machine together form a single system with a joint utility, even though each subsystem brings in different norms and values (see Fig. 9).

Fig. 9

Framework applied to joint utility (Altendorf et al. 2019)

Moreover, the framework can be applied to analyzing the interaction between human and machine on different levels and layers. Deploying a framework of interaction mediation, Baltzer et al. (2019) present an approach focusing on communication and language between both partners (see Fig. 10).

An integrated view of layers of cooperation, assistance and stages of automation based on this general model is shown in Fig. 11; see also Pacaux-Lemoine and Flemisch (2018) in this issue.

Fig. 10

Framework applied to communication between the partners and language (Baltzer et al. 2019)

Fig. 11

Framework applied to layers of human–machine cooperation (Pacaux-Lemoine and Flemisch 2018)

Yet another approach presented in this special issue, by Weßel et al. (2018), focuses on the concept of self-determined decision making with nudging methods. In this concept, the driver is supported by nudges on all layers of cooperation, after initially authorizing the automation to execute these nudges (see Fig. 12).

Fig. 12

Framework applied to self-determined nudging and decision making with nudging methods (Weßel et al. 2018)

5 Outlook: from shared and cooperative control and cooperative automation to a structured design space, use space and effect space of human–machine cooperation

This paper aimed to clarify a number of aspects of the relationship between shared control and human–machine cooperation. We conclude that efforts are worthwhile to conceptually extend shared control towards cooperation at higher task layers (see Abbink et al. 2018 for an overview), and for cooperation to include concepts of shared control at the lowest layers (Pacaux-Lemoine and Itoh 2015; Flemisch et al. 2016). For example, horizontal and vertical extensions of the cooperation concept have been proposed (Pacaux-Lemoine and Itoh 2015), where the horizontal extension concerns cooperation between layers of control, and the vertical extension proposes the integration of functions other than action, i.e., information gathering and analysis, and decision making (Parasuraman et al. 2000). Note that one could also include the number of team members involved, and differentiate between cooperation between a single human and a single machine (“vertical” cooperation) and cooperation between different human–machine systems (“horizontal” cooperation, e.g., Flemisch et al. 2014a, b).

The examples above show that a main point in describing cooperative control and shared control is task definition and task decomposition. According to Sheridan (1992) and Inagaki (2003), there are three types of sharing of control or tasks: relief, extension, and partitioning. Relief is to reduce the human workload, and extension is to enhance the human ability to do a task. Partitioning, on the other hand, is to divide a task into several subtasks. A typical example of task partitioning can be found in the car-driving context, i.e., when the driver manages the lateral control and the machine manages the longitudinal control, or vice versa. As a whole, the driving task is shared between human and machine (a minimal sketch of this partitioning follows below). It is also possible that some of the divided tasks are themselves shared between human and machine.

Schmidt (1991) describes a similar direction with three forms of cooperation: integrative, augmentative and debative. He proposed the integrative form for when the abilities of the agents are different and they have to complement each other to perform a task, subtasks or functions. The augmentative form applies to agents with the same abilities when the workload of one agent is too high and partitioning is required. In the debative form, agents have the same abilities and have to debate to find the best solution. For Millot and Grislin-Le Strugeon, these three generic forms can be combined to describe any cooperative situation (Grislin-Le Strugeon and Millot 1999).

In the special issue for which our paper forms the introduction, an approach is described that detects human intentions based on preconditions, group-specific stimulus–response characteristics, preparing behavior and initiating behavior (Schneemann and Diedrichs 2019). The proposed models can be used in several ways: for the machine (or observer) to understand the human intention, but also for the human to understand the machine intention, or for the designer and engineer to design the human–machine system in a way that the partners can understand each other’s intentions and react cooperatively.
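The lateral/longitudinal partitioning mentioned above could be sketched as follows (a toy illustration with our own naming, not taken from the cited works):

```python
from dataclasses import dataclass


@dataclass
class DrivingCommand:
    steering: float      # lateral control subtask
    acceleration: float  # longitudinal control subtask


def partitioned_driving(human_steering: float,
                        machine_acceleration: float) -> DrivingCommand:
    """Task partitioning: the driving task is divided into subtasks, here with the
    human managing lateral control and the machine managing longitudinal control
    (the roles could equally be swapped)."""
    return DrivingCommand(steering=human_steering,
                          acceleration=machine_acceleration)


cmd = partitioned_driving(human_steering=0.1, machine_acceleration=-0.5)
print(cmd)  # the whole driving task is shared; each subtask has a single agent
```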

All these approaches make clear that we are far from defining one homogeneous concept of shared and cooperative control; rather, we should structure the dimensions of the design, use and effects of such systems. These can be described as the design space, the use space and the effect space. We hope that this overview paper helps in structuring the discussion around buzzwords and concepts towards a good description and understanding of the design and effect space of shared control and human–machine cooperation. A science and development community seems to need terms such as shared control, cooperative control, adaptive automation or cooperative automation to claim novelty and stake out contributions to the debate. Competition can be helpful temporarily to explore alternative and better solutions, but then—in a dialectic approach, e.g., of thesis, antithesis and synthesis (described, e.g., by Hegel and Fichte; see, e.g., Stanford 2019)—all these concepts should be cooperatively re-integrated into a common design space, use space and effect space.

The starting point of the discussion in this paper was H. sapiens, for whom, in competition with the environment and with other species, human–human cooperation was the key to success or failure. With increasingly capable machines, the next key to success or failure of Homo sapiens will be human–machine cooperation, cooperation on human–machine cooperation, and, at its “sharp end”, shared control.