1 Introduction

This article presents a methodological approach to analyzing distributed action in the making and applies it to projects of developing human-robot co-work processes. We employ this methodological approach to point out different ways of making robotic labor available for work tasks previously inaccessible to robots. The empirical research was conducted as part of the research project “The social construction of human-robot co-work by means of prototype work settings (SoCoRob)” within the DFG Priority Program 2267 “The digitalization of working worlds. Conceptualizing and capturing a systemic transformation”. According to the priority program, making available is one of three developmental dynamics of the digital transformation (cf. Henke et al. 2018). This article aims to contribute to a better understanding of the dynamics of making available as an empirical phenomenon.

The new forms of making robotic labor available that we are interested in are technically enabled by a new generation of robots, so-called collaborative robots. According to Decker, robots are “a front end of digitalization” (Decker 2022, S. 199, our translation). “When it comes to the digital transformation”, he argues, “robots are a central component because they can manipulate the environment to make changes and because they can navigate themselves and collect data in a special way in the process” (ibid.). This holds especially true for collaborative robots. These robots are capable of working in direct physical interaction with human workers. They no longer need to be placed at a safe distance from humans or to be fenced off. Instead, the robots’ behavior is adapted to the presence of human workers, using, for example, sensor technology or soft materials. The development of collaborative robots is associated with the hope of opening up completely new tasks and domains of application. The basic idea here is collaboration, not substitution. Instead of automating entire workplaces, collaborative robotics aims to support human work tasks by delegating subtasks to robots (Decker et al. 2017). The main domains of application for collaborative robots are currently industrial manufacturing and care work. Our research covers these two domains of application.

Making collaborative robots available for real-world applications, however, is still an ongoing process. In the care work sector, there are to date few collaborative robots that actually reduce the burden on caregivers (van Aerschot and Parviainen 2020). Particular ideas about how collaborative robots might support care professionals, such as the idea of a beverage serving robot (Schulz-Schaeffer, Wiggert et al. 2023, S. 118), have repeatedly been the subject of care robot development projects for more than a decade, but with limited success. Making collaborative robots available for care work tasks is an ongoing process not only in Germany and Europe. James Wright (2023, S. 3) concludes for Japan that “despite considerable hype, lofty expectations, and substantial investment, robots alone cannot yet deliver on the promise of solving care crises in Japan or elsewhere.” Scholarly literature further attributes the low number of applications to the difficulty of coping technically with the complex “ecosystem” of care (van Aerschot and Parviainen 2020, S. 5) and to the need to provide care robots with a robot-friendly environment (Lipp 2022, S. 20). The development and diffusion of care robots is a highly discursive field. Strong visions and narratives legitimize and influence the development, testing and application of care robots (Parviainen and Koski 2022, Schulz-Schaeffer, Wiggert et al. 2023).

Industrial collaborative robots (“cobots”), in contrast, are already employed in real-world applications to some extent. But compared to conventional industrial robots, the number of industrial cobots in use is still low (Butollo et al. 2021, Wöllhaf 2020). Only 4.8 % of the 373,000 industrial robots installed worldwide in 2019 were cobots (Footnote 1). Moreover, many cobots are employed in work situations where they work side by side with human co-workers rather than in direct collaboration (Huchler 2022, S. 166, Buxbaum and Sen 2021, S. 385). Sabine Pfeiffer (2019, S. 172–173) argues that collaborative applications are difficult to integrate into the established work organization of highly standardized and Taylorized industrial manufacturing, which may explain the low distribution of cobots. Astrid Weiss et al. (2021) share our interest in analyzing how work tasks are distributed between human workers and robots. They argue that for understanding “how cobots are actually being applied in manufacturing” (2021, S. 340) and how this “affects the sociotechnical work environment” (2021, S. 341), human-robot collaboration “needs to be understood as a complex sociotechnical arrangement, in which agency can no longer be exclusively attributed to humans but is distributed among humans and nonhuman agents” (Weiss et al. 2021, S. 357).

The article is organized as follows: first, we introduce a methodological approach to analyzing how the distribution of work tasks between human workers and collaborative robots is developed, designed, negotiated and eventually established (Sect. 2). Then, we provide comparative analyses of evolving co-work scenarios from two collaborative robot development projects we examined as part of our research. These analyses will reveal three different ways of making robotic labor available for work tasks previously inaccessible to robots (Sect. 3). The final section provides a brief summary and discussion of the results.

2 Analyzing Distributed Action in the Making

This section presents a methodological approach to analyzing how the distribution of work tasks between human workers and collaborative robots is developed, designed, negotiated and eventually established. We suggest a comparative approach (Sect. 2.4) for analyzing co-work scenarios, focusing on prototype scenarios (Sect. 2.2). Our approach is based on the concept of distributed action (Sect. 2.3) and employs script analysis (Sect. 2.1).

2.1 Script Analysis

Introducing collaborative robots into work settings means re-designing established ways of conducting particular work tasks or developing new ways of conducting work tasks, such that the robot takes over at least a part of what must be done to complete the task in question. As for most technological artifacts, it only makes sense to undertake the effort of designing these robots, training the human workers, and structuring the work settings accordingly if the robots are intended to contribute to tasks that occur repeatedly. Like most actions that occur repeatedly, these work tasks tend to be carried out similarly each time. This is either because the steps of the respective course of action have been explicitly planned at some time, resulting in a plan that guides how to carry out the action, or because a particular practice of how to routinely carry out the action has evolved over time. Often, the structure of repeatedly conducted work tasks is a combination of plan and practice.

For analyzing how the distribution of work tasks between human workers and collaborative robots is developed, designed, negotiated and eventually established, we focus on the underlying plans and practices. To this end, we draw on the script concept as it has been introduced into science and technology studies by Madeleine Akrich (Akrich 1992a, b, Akrich and Latour 1992) and as it has been used within actor-network theory (Akrich and Latour 1992, Latour 1988). The notion of actions being guided by scripts was originally developed in cognitive and social psychology. This strand of research conceives of scripts as containing knowledge about particular courses of action: knowledge that represents the typically appropriate ways to act in particular situations and becomes (often tacitly and routinely) activated when actors face the situations in question (Abelson 1981, S. 715, Schank and Abelson 1977, Moskowitz 2005, S. 155, 162–165, 176–177). This is the general notion of script, to which Akrich’s script concept adds the insight that scripts become inscribed in technological artifacts if these artifacts are assigned a particular part to play in carrying out particular courses of action.

Akrich argues that developing and designing technological artifacts is necessarily accompanied by particular ideas about how and to what end these artifacts should be used. In order to define the characteristics of a new technological artifact, developers and designers have to envision the contexts of use of the artifact and the part to be played by the artifact in these contexts. These ideas inform the properties and functions of the artifact (Akrich 1992b, S. 207–208). This is to say that the artifact’s developers and designers define the artifact’s properties and functions so that it can take over or support particular parts of the actions for which they intend the artifact to be used. In this way, a part of the script of the envisioned action sequences for which the developers and designers intend the artifact to be used becomes inscribed in the artifact. As material representations of the script, the respective properties and functions of the artifact suggest to its users how and for what purposes it should be used. This is because the artifact will take over parts of actions as intended by its developers and designers only if the action as a whole is carried out according to the script that is partly inscribed in the artifact. In Akrich’s terminology (Akrich and Latour 1992), this means that the script as inscribed in technological artifacts prescribes how the other steps of the action should be conducted to fit together according to the script (Schulz-Schaeffer 2021a, b, c, S. 79–82).

This is not to say that the script as envisioned by the developers and designers and inscribed in the artifacts determines how the artifact will eventually be used, once it has found its place in one or another context of use. In processes of technology development and innovation, the developers and designers of the technological artifacts are not the only relevant group of actors who refer to scripts for organizing actions or are guided by already established scripts. In the contexts of application for which a new technology is envisioned, there may be well-established courses of action already in place that are firmly inscribed in the actors’ practices, in cultural artifacts such as work regulations and rules of procedure or in other technological artifacts. For their part, these inscriptions may prescribe how the new technological artifact would have to perform in order to be useful. Thus, by using the script concept for analyzing how the distribution of work tasks between human workers and collaborative robots is developed, designed, negotiated and eventually established, we trace how and where the different parts of the underlying scripts are inscribed and how competing inscriptions and prescriptions are addressed and negotiated over time (or are ignored or remain unnoticed).

For the purpose of this kind of analysis of the development of distributed actions, we have to keep in mind the wide range of ways in which the scripts can become manifest. A feature or function of a technological artifact can be explicitly designed for carrying out a particular part of a scripted action, or it can tacitly imply preceding or subsequent steps of an action sequence. Similarly, the contributions of the human workers to a scripted action may be explicitly designed and laid out in rules of procedure or instruction sheets, or they may be tacitly manifest in work practices and routines. Moreover, we have to keep in mind that every script contains assumptions about the context of the action in question. Some of these contextual factors will be explicitly addressed by the script’s manifestations (e.g. if carrying out the action depends on certain uncommon conditions that must be ensured), while others are tacitly taken for granted.

2.2 Prototype Scenarios

We find the emerging scripts for the distribution of work tasks between human workers and collaborative robots in projects of developing, testing, and implementing prototypical work settings of human-robot collaboration. Our research focuses on prototypical work settings because this is where new ideas about collaborative work settings are first realized in time and space. Conceptually, we conceive these prototypical work settings as prototype scenarios. Prototype scenarios are a particular manifestation of situational scenarios. According to our definition,

“situational scenarios are images of the future that in some detail specify for envisaged typical situations of use of an imagined new technology how the components of these situations would (or might) interact. These components include not only the imagined new technology with its features and the envisaged users with their interests, preferences and capabilities, but also other people, objects or structures of relevance for the situation. Situational scenarios provide descriptions of the interactions between the components included. They focus attention to the causal relationships one would have to take into account, if the scenario was reality” (Schulz-Schaeffer and Meister 2019, S. 40, cf. Schulz-Schaeffer and Meister 2017, S. 198, 2015, S. 166).

Prototype scenarios are prototypically realized situational scenarios. They occur when innovators translate their ideas about future situations of use—about how a particular new technology should be designed and employed for particular purposes in particular contexts, i. e. their situational scenarios—into prototypes of the new technology and into partial physical realizations of the envisioned context of use (Schulz-Schaeffer and Meister 2019, S. 40, 44–45, 2017, S. 204). Prototype scenarios often first occur in research laboratories and go through different stages of elaboration until they turn into real-world applications.

When collaborative robots are designed based on situational scenarios, they are to some extent designed based on scripts. This is because situational scenarios and their prototypical realizations are representations of how the scenarios’ technical and social components are supposed to interact with one another. The part of the script that defines the robots’ contributions becomes inscribed in the technology. This is to say that some of the robot’s features and functionalities are specifically designed according to the technology’s intended contributions to the course of action, thus representing the robot’s script. To some extent, the robot’s script prescribes to the human co-workers how to conduct their part of the work task and how to coordinate with the robot’s behavior in order to accomplish the respective work task. The same applies in the opposite direction: the work descriptions and work routines of the human workers represent inscriptions which, in turn, prescribe what a fitting behavior of the robot should look like. Script analysis provides the basis for reconstructing in detail how work tasks are distributed between human and robot co-workers and how the collaborative work settings are modified and become stabilized during the development processes. Script analysis also provides the basis for identifying the human workers’ informal and tacit contributions, which are required even in the most formalized and automated work processes, as many empirical studies in the sociology of work have shown (Böhle and Milkau 1989, Funken and Schulz-Schaeffer 2008, Pfeiffer 2016).

2.3 The Concept of Distributed Action

For analyzing how the collaboration between the robotic and the human co-workers is designed, contested, negotiated, re-designed, established or discarded in the process of developing, testing, and implementing prototype work settings of human-robot co-work, we employ the concept of distributed action (Rammert and Schulz-Schaeffer 2002, Schulz-Schaeffer and Rammert 2023). The concept builds on, but goes beyond, actor-network theory by introducing the notion of gradual action (Rammert and Schulz-Schaeffer 2002, S. 44). In recent publications, we have elaborated on the notion of gradual action by distinguishing between an effective, a regulative and an intentional dimension of agency (Schulz-Schaeffer 2019, 2023, Schulz-Schaeffer and Rammert 2023). The effective dimension covers the ability to bring about the changes necessary to achieve the goal of the action; the regulative dimension concerns the control over the execution of the action; and the intentional dimension is about owning the goals. For analyzing human-robot collaboration, it is particularly important that the regulative dimension includes two kinds of control: control in the sense of steering (“Handlungssteuerung”) is the kind of control that orients the activities toward an underlying plan; control in the sense of monitoring (“Handlungskontrolle”) is the ability to recognize the conformity or deviation of plan and actual performance and to intervene in case of deviation (Schulz-Schaeffer 2023, S. 18–19). For any constellation of distributed action, the concept allows us to describe precisely in which way and to what extent activities are distributed between human and robot co-workers (Meister and Schulz-Schaeffer 2021a, b, c, Schulz-Schaeffer, Meister, et al. 2023).

2.4 Comparative Analysis

We analyze prototypical scenarios of human-robot collaboration and their underlying scripts in order to learn about possible new ways of distributing tasks between human workers and collaborative robots. We are especially interested in new forms of collaboration that make robotic labor available for work tasks that previously could be carried out only by human workers. Since we are interested in understanding how work collaboration changes when collaborative robots come on the scene, we need to employ a comparative approach. The general descriptive rule of a comparative approach to understanding the agency of technological artifacts is, as Bruno Latour (1988, S. 299) has put it:

“every time you want to know what a nonhuman does, simply imagine what other humans or other nonhumans would have to do were this character not present. This imaginary substitution exactly sizes up the role, or function, of this little figure.”

We can hope to learn even more about the agency of technological artifacts if our comparisons do not only refer to alternative settings imagined by the researcher but to alternative settings that actually exist (or have existed) and can be studied empirically. Our empirical data allows us to conduct comparisons of this kind and provides us with different options for doing so: A substantial part of our case studies concerns collaborative work scenarios in which the collaborative robot takes over parts of a particular course of action that were previously carried out by human workers. For these cases, we use the information we gathered about how the work task was carried out before the introduction of the robot as the basis for comparison. In some of our case studies, however, the prototype scenarios contain a human-robot collaboration for a new work task, and there is no previous setting that can be used for comparison. In such cases, we have to rely on indirect comparisons, which are not necessarily less informative, however. If there are previous work settings for similar work tasks on which the collaborative scenario is based, they may serve as a point of comparison. Or the engineers may have come up with more than one scenario depicting how to deal with the new work task, so that the human-robot collaboration can be compared with these alternative ways of distributing the work task.

In addition to comparisons with previous (or alternative) ways of organizing and distributing the work tasks subject to the collaborative work scenarios we study, we also compare different manifestations of these scenarios as they evolve over time. Before a situational scenario becomes realized prototypically, it exists as an idea about a new way to employ a collaborative robot for a collaborative work task. To the extent that building prototype scenarios is a collective effort—which is usually the case, since it is typically part of projects aimed at developing and (ultimately) implementing new technological solutions—these ideas need to be communicated and discussed, and thus they are given symbolic expression: in written descriptions, sketches, or verbal communication. In our empirical data, the main source for this early stage of the collaborative scenarios are the verbal accounts of our interview partners. During its prototypical realization, a scenario goes through different stages of elaboration. A first prototypical realization in a laboratory setting may focus on demonstrating that the collaboration “in principle” works as expected. Later versions, however, will increasingly have to take into account the real-world conditions of the intended contexts of application. We regard the unfolding elaboration of the prototype scenarios as a process of negotiation in which the evolving human-robot collaboration takes shape (Schulz-Schaeffer and Meister 2019, S. 44–52). Comparing the prototype scenarios at different stages of elaboration is therefore an important part of our comparative approach.

For the different versions of the scenarios, we analyze the respective scripts that define how the work task shall be carried out, how it is divided into individual work steps, and how the work steps interact with each other to form the distributed action as a whole. We determine for each individual work step how it is assigned to human and robotic co-workers and what kind of effective, regulative, and intentional agency is required from and assumed to be possessed by the respective agent. The analysis of the scenarios provides us with detailed descriptions of the envisioned distribution of work. Moreover, it allows us to investigate in detail how different parts of the script are inscribed in different ways and which complementary or competing prescriptions they imply. The comparison between different versions of the scenarios informs us about inscriptions and prescriptions that become subject to change and about gaps in the scripts. This allows us to reconstruct in detail what becomes subject to negotiation during the development process.

For the purpose of comparison, we build synopses of the different versions of a scenario for every single empirical case. We build them as spreadsheets in which the columns represent—for each version or manifestation of the scenario—the respective action sequence for carrying out the work task. Within the columns, we use the rows of the spreadsheet to list sequentially the individual work steps for carrying out the work task. This notation allows us not only to easily identify changes in the assignment of and responsibility for individual work steps, but also to trace how individual work steps are added, modified or dropped in the process of designing the work task as a distributed action, and how the interaction between the individual work steps changes (see Tables 1 and 2).

Table 1 Action steps of the sequence “Filling the beverages with water” (CP = Care Professional)
Table 2 Action steps of the sequence “Distributing the beverages to residents” (CP = Care Professional)
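
The following sketch merely illustrates the structure of such a synopsis; it is not the spreadsheet format we actually used. The version labels, step descriptions, and agent assignments are hypothetical placeholders loosely based on the beverage serving case (Sect. 3.1), and the agency labels follow the effective, regulative (steering/monitoring) and intentional dimensions introduced in Sect. 2.3.

```python
# Illustrative sketch only: one possible way to encode scenario versions as
# columns of sequentially listed work steps and to compare two versions.
# Version labels, step descriptions, and agents are hypothetical placeholders.

from dataclasses import dataclass, field


@dataclass(frozen=True)
class WorkStep:
    description: str   # the individual work step (a row of the synopsis)
    agent: str         # who carries it out: "robot", "care professional", ...
    agency: frozenset = field(default_factory=frozenset)
    # agency required of the agent, e.g. {"effective", "regulative:steering",
    # "regulative:monitoring", "intentional"}


# Each key is one version (column) of the scenario; each value lists the
# work steps in the order in which they are to be carried out.
synopsis = {
    "mental model": [
        WorkStep("fill cups with water", "robot",
                 frozenset({"effective", "regulative:steering"})),
        WorkStep("place cups on the tray", "robot", frozenset({"effective"})),
        WorkStep("navigate to the day room", "robot",
                 frozenset({"effective", "regulative:steering"})),
    ],
    "prototype scenario": [
        WorkStep("fill cups with water", "care professional",
                 frozenset({"effective", "regulative:steering"})),
        WorkStep("place cups on the tray", "care professional",
                 frozenset({"effective"})),
        WorkStep("confirm that cups are in place", "care professional",
                 frozenset({"regulative:monitoring"})),
        WorkStep("navigate to the day room", "robot",
                 frozenset({"effective", "regulative:steering"})),
    ],
}


def compare(version_a: str, version_b: str) -> None:
    """Report work steps that are added, dropped, or reassigned between versions."""
    a = {s.description: s for s in synopsis[version_a]}
    b = {s.description: s for s in synopsis[version_b]}
    for desc in b.keys() - a.keys():
        print(f"added:      {desc} ({b[desc].agent})")
    for desc in a.keys() - b.keys():
        print(f"dropped:    {desc} ({a[desc].agent})")
    for desc in a.keys() & b.keys():
        if a[desc].agent != b[desc].agent:
            print(f"reassigned: {desc}: {a[desc].agent} -> {b[desc].agent}")


compare("mental model", "prototype scenario")
```

In this toy example, the comparison flags the reassignment of the cup-filling and cup-placing steps from the robot to the care professional, the kind of redistribution we discuss in Sect. 3.1.1; in our actual analysis, the corresponding comparisons are made directly on the spreadsheet synopses.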

3 Comparing Human-Robot Co-Work Scenarios

This section provides comparative analyses of evolving co-work scenarios from two collaborative robot development projects we investigated as part of our research. These analyses will reveal three different ways of making robotic labor available for work tasks previously inaccessible to robots: by redistributing work steps to human workers and by redefining work tasks, as exemplified in the project developing a beverage serving robot (Sect. 3.1); and by identifying the collaborative robot as the most suitable worker for the task, as exemplified in the headlight and fog light alignment project (Sect. 3.2).

3.1 Serving Beverages at a Care Facility

Serving beverages to the residents in care facilities and getting them to drink is a daily task for care professionals (Footnote 2). Doing so repeatedly is especially important for dementia patients in order to avoid dehydration. Delegating this task to care robots is an application scenario that goes back more than 20 years and has played a significant role in developing and deploying care robots up to today (Schulz-Schaeffer, Wiggert, et al. 2023). Besides other scenarios, one of the care robot development projects we investigated is about such a beverage serving scenario. The project itself was initiated by the care manager and the head nurse of a care facility who—after seeing a care robot at a nursing fair—approached the manufacturer of that particular care robot. According to the company’s aims, this robot is supposed to be able to conduct several care-related tasks, beverage serving being one of them. The company’s idea is to provide these tasks as standardized modules, which would need to be customized only to a minimal extent on-site. Together with the company, the care manager formed a consortium of different actors, including partners from academia as well as a second care facility—the wife of one of the professors from the academic partners worked there at the time. Together, they applied for project funding from an interregional fund and started a two-year trial at the two participating care facilities, integrating different functionalities of the robot. The partners’ goal was to gain more experience with robots in contexts of care. The manufacturer wanted to further develop its robot’s functionalities. The second care facility is the only one in which the beverage serving scenario was used.

At an early stage of the project, one of the manufacturer’s user-experience managers got together with the management of the care facility with which the robot manufacturer cooperated for this project to discuss which tasks from the robot’s current repertoire they would like to implement in the facility, and where. Among other tasks, they decided on the delivery of beverages. The facility’s management chose the dementia unit as the first area to deploy the robot, as the unit manager is considered very tech-savvy and “likes to do stuff like that” (PR01 CPR01 Care unit manager #00:53:31#, own translation). From then on, the user-experience manager consulted mainly with the head of the dementia unit, who was considered by the manufacturer a so-called “superuser” of the robot and whose task was to mediate between the care staff and the user-experience manager, as well as, later on, to motivate the staff and residents to continue collaborating and interacting with the robot (PR01 CPR01 Care unit manager #01:00:07#).

Based on surveys of care professionals conducted by the company’s user-experience unit, the company’s engineers began to develop a general scenario of how the robot should conduct the beverage serving task. Their mental model of this scenario is as follows: A care professional instructs the robot to begin the task either via a previously defined calendar entry using a tablet or via a verbal instruction. The robot then navigates autonomously to a previously specified location from where it can fetch the beverages: usually the kitchen of the care unit. After arriving at the kitchen, the robot uses its vision system to identify the location from which to get the beverages. If a water-filling machine is in use, the robot independently operates the machine to fill the cups with water. The robot is also able to locate the individual cups, to grasp them, and to place them on a tray it carries. With the water cups on the tray, it then navigates to the day room. There, the robot uses posture recognition to locate the residents who are currently in the room. It safely navigates to the individual residents and offers them a beverage by taking a cup from the tray and placing it on the table in front of the resident. It also verbally motivates the residents to drink. For a future stage of development, the engineers even imagine the robot monitoring whether the residents drink the water or not.

As is often the case with care robot development projects, the first prototypical realization of this scenario took place in a real-world environment. In order to adapt the beverage serving scenario to the facility’s particular conditions, the manufacturer’s engineers inspected the dementia unit accompanied by the head of the dementia unit as well as additional care professionals and technicians from the care facility. In the process of getting familiar with the local conditions and of learning how the robot would have to adapt to them, the engineers made several observations that led them to change parts of the script of the beverage serving task as defined in their mental model. We take a closer look at two of them in the following sections.

3.1.1 Redistributing Work Steps to Make Robotic Labor Available

The first change we want to discuss concerns the action steps of filling and/or fetching the water cups in the kitchen. The care professionals at the dementia unit use a semi-automatic machine to fill the water cups (see Table 1, step 1), and they use special cups with drinking lids to minimize spillage when drinking. The engineers concluded that the gripper with which the manufacturer’s robot was equipped would not allow it to operate the machine and that the special cups could not easily be detected by the robot’s vision system due to their shape and texture. As the robot would neither be able to fill the cups itself nor to take already filled cups from the kitchen counter within this setting, changes to the initial beverage serving script were necessary. The new solution was negotiated and agreed upon by all parties involved in the project, the engineers as well as the care professionals. The resulting script now involves a care professional who fills the cups with water once the robot arrives at the kitchen and verbally asks for filled cups. The care professional also places the cups on the robot’s tray. Upon confirmation that the filled cups are in place, the robot can start its journey to the day room (see Table 1, step 5). Compared with the script of the mental model, the revised script redistributes some work steps to the care professionals; compared with the care professionals’ previous practice, however, the robot still relieves them of several work steps. Thus, though the care professionals are not completely happy with this solution, it is acceptable to them because the rationale of reducing the care professionals’ work burden still seems to be intact.

In the domain of care robots, redistributions of this kind occur quite often when imagined scenarios of human-robot co-work are translated into prototype scenarios situated in real-world environments. The crucial question, then, is whether the resulting human-robot collaboration still makes sense: i.e. whether it still relieves the human workers of work tasks and, hence, whether it actually makes robotic labor available for work settings that have previously been inaccessible to it. As our example shows, there is often a fine line between preserving and destroying the scenarios’ labor-saving capacities when such redistributions take place.

In our case, this line is drawn between two options of how to make sure that there is a care professional in the kitchen who supplies the robot with the beverages. According to the mental model, there would be two options for scheduling: either spontaneously by verbal instruction or by calendar entry, which requires preplanning. Voice recognition turned out to be too unreliable in the context of application. Thus, the engineers opted for the calendar function. Knowing that the care professionals usually distribute beverages around 3:30 pm, they set 3:30 pm as the time for the robot to start moving to the kitchen (PR01 CPR01 Care unit manager #00:49:41#). Without the option to coordinate spontaneously when to supply the robot with the water cups, the care professionals lose the flexibility of their previous work practice, which allowed them to deal with serving beverages sometime around 3:30 pm, whenever it fit into their work schedule. Instead, one of them needs to be in the kitchen at the predefined time. This loss of flexibility is enough to turn the beverage serving scenario into a setting that involves additional work for the care professionals instead of being labor-saving.

3.1.2 Redefining the Work Task to Make Robotic Labor Available

The second change we want to discuss concerns the work steps of distributing the beverages in the day room. According to the mental model, the robot is able to identify people sitting in the day room and to go to them to serve them the beverages. The robot to be employed for the task is in principle capable of recognizing people, and the engineers had trained the robot’s posture recognition algorithm accordingly prior to the implementation process. Recognizing residents in the day room was supposed to be part of the robot’s script in the negotiated prototype scenario. Nevertheless, it turned out that in the real environment of the day room of the dementia unit the robot was not able to recognize people reliably and could therefore not navigate towards them successfully.

The engineers responded to this problem with an ad hoc solution. They put a label on the table in the day room where residents often sit. With the help of the label, the robot is able to navigate to this table. Once at the table, the robot’s next steps are to measure its distance to the table and the height of the table surface, to grasp the cups one by one from its tray, and to place them next to each other on the edge of the table (PR01 ENG01 Software engineer #00:09:54#). Then the robot is to say: “Please drink. Drinking is healthy.” (see Table 2, steps 2–4).

In devising this solution, the engineers substantially rewrote the robot’s script without considering the consequences of these changes or discussing them with the care professionals or other involved parties. Most importantly, the robot is now no longer actually serving beverages to residents. Rather, it places cups on a table and does so whether or not there are residents who could take them and hear the robot’s invitation to drink. Thus, it is now up to the care professionals to restore the purpose of the entire action by distributing the cups to the residents themselves or by moving the residents to the “robot table” (PR01 CPR02 Care facility manager #00:22:27#, see Table 2, step 5). It is also now up to them to check whether the residents actually drink some water (see Table 2, step 6).

Obviously, these changes to the robot’s script do not make much sense if the goal is to make robotic labor available for the work task of serving beverages. The redistribution of work tasks to care professionals, which we discussed in the previous section, suggests that robotic labor is not yet as available for human-robot collaboration as is often envisioned at the beginning of care robot development projects. But even if redistribution reduces the robotic contributions, as discussed above, the robot’s remaining contributions may be sufficient for the human-robot collaboration to still make sense. The changes we are observing here, however, are not about redistributing work tasks within a defined course of action but rather about redefining the action from the perspective of what the robot is capable of contributing: i.e. in changing the robot’s script, as described above, the engineers prioritized the criterion that a work step can be successfully carried out by the robot over the criterion that this work step contributes in a meaningful way to the work task it is part of.

We regard this, too, as an attempt to make robotic labor available. In the short term, it is obviously a misguided way of integrating robots into collaborative work settings. Viewed in the long run, however, this rather dysfunctional first implementation may serve as the starting point for improvements from which a co-work scenario eventually emerges that actually does make sense. This is the rationale behind the engineers’ ad hoc solution. Following this rationale, however, the engineers implicitly turned what the staff of the care facility had initially conceived as a real-world application into an experimental laboratory setting. Over time, this became apparent to the care professionals participating in the project: “It was really more work for us. I think about from the middle of the project it was just a way of supporting the company. We just wanted to still support the project, but we no longer claimed that the robot should support us in any way.” (PR01 CPR02 Care facility manager #00:06:05#, own translation).

3.2 Alignment of Headlights and Fog Lights in Automotive Final Assembly

Alignment of headlights—a safety requirement for car manufacturers (ISO 303:2002)—is a well-established work task in final assembly at the Ford plant in Saarlouis. The decision also to align the fog lights in this production step is the starting point for developing the co-work scenario we are discussing in this section. Previously, the fog lights were pre-aligned by the manufacturer of the fog lights. Responding to “higher requirements of the customers on street lighting concepts” (IR01 PL01-1 Project leader, #00:21:01-0#, own translation), the product development team decided to additionally align the fog lights as part of the assembly process.

For solving the problem of how to integrate the fog light alignment into the existing production step, the manufacturing department formed a task force. The task force consisted of manufacturing engineers and process planners from Ford Saarlouis and from other production sites. The task force considered different options and developed alternative scenarios, which they evaluated in detail in concept studies. As a result of the evaluations, they decided to employ collaborative robots. The team was then expanded to include robot programmers from the robot-arm manufacturer and experts from the manufacturer of the car-light alignment station. The programmers supported the team with programming and the installation of the cobot. The manufacturer of the car-light alignment station contributed its expertise in the alignment of the fog lights. After developing the solution, the team conducted a feasibility study and then implemented the resulting co-work scenario successfully at the production plant. The feasibility study was carried out on the premises of the manufacturer of the car-light alignment station. Having a headlight alignment station with integrated fog light alignment in its portfolio was an additional benefit of this cooperation for the alignment station manufacturer.

3.2.1 An Open-Ended Search Within a Well-Defined Search Space

The previous process of aligning the headlights provides the frame of reference for the alternative scenarios of additionally aligning the fog lights, because they are all about how to integrate the additional task into this work cycle. The process takes place at an alignment station, consisting of a platform that allows the vehicle to be positioned precisely and a movable light collection box for measuring the light values of the headlights. For each headlight there are two aligning screws (for vertical and horizontal alignment) under the hood of the car, accessible from above. A worker inserts an adjustment tool into each of the screws. These tools then automatically adjust the screws based on the data from the light collection box. This work step is repeated for the other headlight. The time set for this work cycle is 120 seconds (IR01 P01-1 Project leader #00:21:53-0#).

One of the defining factors narrowing down possible ways of integrating the fog light alignment into this process is the position of the respective alignment screws. Due to restrictions imposed by the design of the vehicle’s engine compartment, they are located below the main headlights, accessible through a horizontal screw channel. It was immediately clear to the project manager responsible for the redesign of the alignment process that, for ergonomic reasons, no worker could perform the task of inserting an adjustment tool into this screw (IR01 PL01-1 Project leader #00:35:00-0#). Indeed, there is an ergonomics guideline that prohibits constant bending in industry (DIN EN 614-1:2009-06). The option of assigning this task to a conventional industrial robot was excluded by a limitation of the existing alignment station: requiring protective fences, such robots would not have fitted into the available space (IR01 PL01-2 Project leader #00:16:07-4#).

With the solution space thus already narrowed down, the task force came up with the option of building a lift to raise the car or to lower the worker into the floor, so that the worker could reach the alignment screw in a standing position. Solving the ergonomic problem by introducing a lift, however, creates incompatibilities with other attributes of the alignment process. Raising and lowering the lifting platform would take too much time given the 120-second timeframe of the work cycle. Thus, this solution would have “made no sense at all in terms of cycle time” (IR01 PL01-1 Project leader #00:21:01-0#, own translation). Moreover, it would be difficult to accommodate the spatial limitations, since the lift would require a safety area around it (IR01 PL01-2 Project leader #00:16:07-4#). Finally, it would be challenging to implement this setting in the four-week time slot provided by the production plant’s summer break.

Using a collaborative robot for adjusting the fog lights was the next option the task force evaluated. According to their concept study, this solution promised to have the following advantages over the previously considered options: It would meet the specified cycle time of 120 seconds, since the robot could align the fog lights while a factory worker simultaneously aligns the headlights. There would be no non-ergonomic work tasks. Moreover, this solution would accommodate the work station’s spatial limitations, since collaborative robots require no safety fences. Finally, the acquisition costs for the collaborative robot were estimated to be lower than those of conventional industrial robots (IR01 PL01-1 Project leader #00:24:37-0#). The task force thus decided to pursue this solution and to conduct a feasibility study.

For the feasibility study, a copy of the previous calibration station, supplemented with collaborative robots, was built in a laboratory. As with the human worker, the task assigned to the robots is to insert the adjustment tool into the adjustment screw, to hold it there while the adjustment tool automatically adjusts the fog light using the measurement data from the light collection box, and to remove the tool afterwards. In the scenario, which was thoroughly tested during the feasibility study, there is a collaborative robot for each of the fog lights but only one worker for both headlights. Their work is coordinated such that while the worker adjusts the headlight on the right side, the left-side robot adjusts the fog light on the left side, and vice versa. The feasibility study revealed some minor problems, which the team was able to solve by making small adjustments. Afterwards, an exact copy of the work station and the associated co-work process was built at the Ford plant in Saarlouis to be used as part of the final assembly. Again, the team had to undertake only a few adjustments to make the scenario work under the real-world conditions of the production plant and was able to implement the new setting during the plant’s four-week summer break.

3.2.2 How Preconditions of the Context of Application Make the Collaborative Robot the Most Suitable Solution

A main characteristic of this scenario development process is that most of the decisions leading to the work process as it was finally implemented at the production plant were made during the concept studies that resulted in the first prototype scenario. Once this first prototype scenario was built for the feasibility study, only minor adjustments were required. For instance, it turned out during the feasibility tests that, due to millimeter-range geometric differences between individual cars, the distance between the robot and the car had to be measured each time for the robots to find the screw hole. Consequently, the engineers equipped the robots with distance meters. Similarly, it turned out during the implementation process that the lighting conditions at the production plant required the screw hole to be illuminated separately in order to be reliably detectable by the robots.

What explains this striking difference from the development of the beverage serving scenario, in which the confrontation with real-world conditions during the prototypical realization of the scenario required the robot’s and the human workers’ scripts to be substantially revisited and renegotiated? To answer this question, we have to look at what guided the task force members in their evaluations of the alternative scenarios they were considering. As it turns out, their evaluations and the design decisions they derived from them were largely guided by fixed and uncontested preconditions of the context of application at different levels (see Table 3). Some of these are conditions predefined by the industrial settings, such as the ergonomic rules and safety standards or the economic rationale of preferring a less expensive solution over a more expensive one. Others are conditions predefined by the existing headlight alignment process and by the production process it is part of: the 120-second timeframe of the alignment work cycle, the position of the fog light adjustment screw, the set-up of the alignment station including its spatial limitations, and the four-week time slot for implementing the new calibration station and the new work process. But in contrast to the many preconditions of the context of application which the developers of the beverage serving scenario had to face as well, these preconditions did not just pose problems but at the same time pointed the way to their solution. How is this possible?

Table 3 How the alternative scenarios considered accommodate the fixed and uncontested preconditions

The answer consists of three parts: First, the decision to insert fog light alignment as an additional work task into a preexisting work process at an already established work station means that there was an already established work scenario with a corresponding script, which indicated how the worker and the alignment station’s technology interact and which prescribed, to some extent, how the additional task could or could not be included. Since the development team took the existing setup as a given, it is clear that the design of the new work task had to adapt to the already established work scenario. This considerably reduced the design options and thus the need to negotiate the new co-work scenario. Second, for many of the questions that were still open for discussion, there was no need for direct negotiation between the groups of actors relevant for establishing and running the new work process, because there were objectifications that represented their interests and positions: ergonomic guidelines, safety standards, costs, and time frames. Third, in contrast to the beverage serving scenario, the goal of the development team was not to find a way to employ a collaborative robot in some meaningful way, but to find a way to integrate the new work task into the existing work process by whatever means were suitable. As it turned out, this was by assigning the task to collaborative robots.

Thus, in this case, making robots available for work tasks previously inaccessible to industrial robots does not follow the logic of promoting collaborative robots by looking for problems for which they may provide a solution. Rather, it follows the logic of finding a solution to a given problem. The collaborative robots were made available for the alignment task, because as a result of an open-ended search they were considered to be more suitable for the task than humans or conventional industrial robots.

4 Conclusion

This article introduced a methodological approach to analyzing distributed action in the making and applied it to analyzing different ways of making robotic labor available for work tasks previously inaccessible to robots. To understand how ideas about distributing work tasks between human and artificial workers are developed, evolve over time and are eventually implemented (or not), we suggest conducting detailed comparisons between different manifestations of the scenarios used by engineers and other parties involved to express and elaborate on these ideas. Especially important in this respect are prototype scenarios, because it is in these scenarios that the new ideas are first realized in time and space. The approach is based on the concept of distributed action (Rammert and Schulz-Schaeffer 2002, Schulz-Schaeffer and Rammert 2023, Schulz-Schaeffer 2019, 2023), which suggests a symmetrical way of analyzing the contributions of humans and technological artifacts to actions and which, at the same time, preserves a differentiated sociological concept of action. For any constellation of distributed action, it thus allows us to describe precisely in what way and to what extent activities are distributed between human and robot co-workers. In analyzing the different manifestations of the scenarios, we focus on the underlying scripts, which inform us about how the distribution of work tasks becomes inscribed in and prescribed to technology and work practices.

In analyzing the project of developing a beverage serving robot, we illustrated how to use this methodological approach for detailed analysis at the level of the single work steps of sequences of distributed action. The analysis revealed two different ways of making robotic labor available for work tasks previously inaccessible to robots: by redistributing work steps to human workers and by redefining work tasks. The project of developing a solution for fog light alignment represents a third way of making robotic labor available: by identifying the collaborative robot as the most suitable worker for the task.

In two respects, the two projects represent contrasting cases. The first respect concerns how the distribution of work between humans and robots is negotiated. In the beverage serving case, direct negotiations between the engineers and representatives from the context of application are highly important. In the fog light alignment case, in contrast, there is little need for direct negotiation between all actors involved in and affected by the redesigned work process, because most design decisions are predefined by givens: preconditions of the work setting and objectified rules and standards. Second, the beverage serving case represents a technology-driven approach, while the fog light alignment case represents a problem-centered approach. This is to say that the beverage serving project is about finding useful ways to employ collaborative robots. The fog light alignment project, in contrast, is about finding a solution with whatever means are suitable. It is thus somewhat ironic that it is this project which actually makes robotic labor available in a real-world application—and not the project that explicitly promotes the use of collaborative robots.