Production concepts and technology developments concerning automation, Cyber-Physical Systems (CPS), Industry 4.0 (I40), and the Internet of Things (IoT) address fully autonomous systems, fostered by an increasing range of available technologies for distributed decision-making, sensors, and actuators in robotics systems [1,2,3]. This has important implications for specific production logistics settings with a multitude of transport tasks, e.g., between warehousing or material supply stations and production locations within larger production sites, as in the automotive industry [4,5,6,7,8]. In most cases, however, mixed environments in which automated systems and humans collaborate (e.g., cobots) are not at the center of such analysis and development endeavors. From an interdisciplinary research perspective, this constitutes an important research gap, as the future success of automated systems will rely on human-computer interaction (HCI) and efficient, successful collaboration between competent workers and automated robotics and transportation systems [9,10,11,12].

We derive an HCI efficiency description for production logistics based on an interdisciplinary analysis consisting of three interdependent parts: (i) a production logistics literature review and process study, (ii) a computer science literature review and a simulation of an existing decentralized autonomous traffic control algorithm applicable to production logistics settings, specifically adapted to HCI, and (iii) a work science analysis of automation settings referring to theoretical foundations and empirical findings on the management of workers in digitalized work settings. A conceptual synthesis derives from these inputs a generalized HCI concept for production logistics for future research and business applications. The three approaches are chosen to analyze the crucial role of human interaction in automated production logistics and I40 settings. The methods stem from different disciplines, as successful automation concepts have to consider computer science, economics, and work science perspectives alike. Existing research contributions mainly address technical aspects and treat automation concepts solely as computer science optimization problems. Feasible and sustainable concepts for automated production, e.g., within production transport, will only work if the human factor is included, as production environments will remain mixed settings of robots and human workers for a long time to come [13,14,15]. For this interdisciplinary analysis and concept development, we newly include the question of human intuition and its development within a digitalized production logistics setting, as well as the automated algorithms' reaction to human actions.

The specific contribution of this paper is to emphasize the value of an interdisciplinary approach to HCI settings in production logistics. This is represented by the objective of deriving an HCI efficiency description for production logistics based on the three areas of production logistics management, computer science, and work science. It is complemented by a production logistics traffic control problem, exemplified in a simulation including human as well as automated robot actors and outlining a hybrid decision model with human-robot interaction.

This paper is structured as follows: Section 2 outlines a literature review as well as a process study regarding production logistics as a preamble to questions of automation and HCI in production. Section 3 presents a computer science literature review for HCI as well as a simulation addressing a typical production logistics traffic coordination problem, e.g., for automated and human-operated forklift trucks in a larger production hall complex. Building on that, Sect. 4 outlines the state of the art in HCI from a work science and human resources perspective, providing further inputs towards a comprehensive HCI model explained in detail in Sect. 5. Finally, Sect. 6 provides a conclusion and research outlook regarding HCI in production logistics.

Automation and HCI in production logistics

State-of-the-art and current developments

Automation and the application of artificial intelligence (AI) are pervasive topics in production, transportation, and logistics—from autonomous cars and trucks to ergonomic enhancements for workers in production process handling and picking [16,17,18,19,20]. Manufacturing and production management have changed over the last decades, e.g., with the advent of cheap sensors and actuators communicating via the Internet, enabling real-time connections between systems, materials, machines, tools, workers, customers, and products as the IoT [3, 21, 22]. The resulting data volume (Big Data) generated by all connected objects represents the new raw material of our time, changing many business models and environments. The IoT allows for a production paradigm with digital customer involvement from the development and design phase onwards [7, 23,24,25]. At the same time, market demand requires high volumes of individualized products, bearing on I40 with its transformation of industrial production by merging the Internet and information and communication technologies with traditional physical manufacturing processes [20, 26, 27].

With I40, new smart factories grounded in the digitalization of manufacturing and assembly processes are expected. Such factories are characterized by high production flexibility through the use of reconfigurable machines that enable personalized production in batches as small as lot size one [28, 29]. A remarkable opportunity to reach these goals is the development of a new generation of manufacturing and assembly systems implementing I40 principles in production processes (Fig. 1).

Fig. 1 Infographics regarding important trends

Beyond that, complex production systems and strategies such as make-to-order require advanced supply and logistics concepts, as high-volume stock keeping is not feasible for such production strategies while customer demands regarding quality and delivery time nevertheless have to be met [20, 30, 31]. The automotive industry in particular has been a strong leader and innovator in applying new automation and enhancement technologies. This was true for logistics concepts like just-in-time or just-in-sequence in the past and is still true today for innovations such as I40 and CPS [25,26,27, 32, 33].

In order to shed light on actual business practice developments in production logistics as well as on the challenges and hurdles of the automation required by I40 concepts, the following section outlines a current process study of a mid-size firm. The inputs are also essential for the HCI optimization model derived in the later part of the paper.

Example case

The example case provided here addresses a mid-size logistics company. The company has about 200 employees and strongly innovates its processes, especially in the areas of order picking and logistics. It is involved in many production logistics settings for larger companies in retail and manufacturing and has a focus on automation, as workers still handle heavy materials and products. On a site area of 40,000 m², the workers perform typical production preparation processes like re- and co-packing. Usually, a wide variety of specific product and process requirements demands high qualification levels of the workers involved. For example, they have to meticulously maintain the required quantity and quality as well as scheduling and sequence profiles. In order to allow for comprehensive tracking and tracing of all activities, they use support systems to document all movements and actions, e.g., with barcode scanning or voice approval of used parts. Moreover, they rely on fully automated processes, e.g., for securing transport items within the site and towards other production areas. Furthermore, the company applies automation in order to protect workers from hard working conditions, with high ergonomic strain especially on the back during lifting activities as well as during monotonous handling activities repeated many thousands of times a day. Automation and robot support in this area also helps the company to enlarge the potential worker pool, as human resources are scarce and not many people are willing to work in production logistics. Therefore, this study reports and discusses the hurdles and experiences with additional automation steps and concepts, also in order to derive further insights for a generalized HCI model in production logistics.

As specific hurdles for automation steps and implementations, we identify the following items regarding the workers involved: (i) processes and technology: the company tested several automation technologies together with qualified suppliers and changed the addressed processes accordingly. This included height adjustment for manual processes, automated lifting devices, and automated transportation items. Although intensive change management with worker integration and process analysis and adaptation was implemented in many cases, the workers used the automated support only for a short time before traditional behavior and processes settled in again: though initial acceptance and use were high, application declined rapidly after some weeks. (ii) Acceptance: although the company did not apply automation throughout or follow through on all steps, workers' acceptance of and motivation for optimization in general increased. This was an important lesson learned, as workers were integrated in projects and feedback loops regarding production logistics process optimization. (iii) Competence: the company additionally recognized that workers—though highly motivated—were not adequately qualified, e.g., regarding ergonomics in automation steps or decision speed and competence. As a result, workers structurally underestimated the long-term health and productivity effects of automation investments, even though health support and improvement in particular are in their own vital interest.

Altogether, the example process shows that including workers in the change management process, allowing for an extended test and implementation period, and securing top management commitment are crucial for successful automation. This is in line with traditional change management concepts. In addition to these internal factors, external factors such as customer requirements and technical or market influences are important for automation changes and have to be managed and included sensibly, too. For example, the range of required variants and sequences from the demand side may well restrict the technically feasible automation options, which is a crucial insight to obtain before entering expensive change processes towards automation.

As benefits of automation steps and implementations, the example case reveals the following points regarding the workers involved: (i) motivation: automation and technology application can considerably enhance the engagement and motivation of workers. This can be explained by the general advance of technology in private contexts as well; workers are on average very motivated to support and experience such developments, in most cases improving their own knowledge and capabilities. (ii) Work environment: technical advances can also be used to improve the general work environment, e.g., in order to increase the attractiveness of logistics jobs and to recruit new personnel. (iii) Simplicity: simple solutions are superior to more complex variations, as workers can more readily comprehend the changes. This is, for example, the case with modern barcode scanners: cable-free handheld models are available, but they would make the process more complex due to the risk of mistakes and wrong scheduling.

In total, the bottom line of the company's experiences with automation steps can be formulated as follows: hurdles and benefits often concern the same questions or areas, for example worker motivation and acceptance. The time perspective is important: long test phases and process adjustments should be allowed for before a final steady-state implementation is planned and expected. The time horizon for expected benefits also matters, as, e.g., ergonomic health improvements or decision time improvements become apparent only after a certain period and still enhance overall productivity in production logistics significantly. From this business practice experience, positive worker acceptance in HCI contexts can therefore be expected if workers are competent to understand the changes, can see short- and long-term benefits, and solutions are kept simple where feasible.

In most production settings, solutions are connected to the state of the art in computer science and robot technology as well as artificial intelligence applications, which are outlined in the following section to complement this first perspective.

Computer science perspective on HCI


The ongoing development of recent years has brought a larger paradigm shift to HCI: new sensors allow for new interaction modalities. While mobile devices recently introduced the most notable change (touch- and gesture-based interaction evolved into the main interaction modality), the advances in machine learning, cognitive computing, and sensor technology will have an even larger impact on how we interact with machines. We also see constant improvement in the field of language and speech processing. We will now review recent advances in HCI with regard to autonomous systems and then focus on how to create the interaction with such systems. Note that we use the terms autonomous systems and robots synonymously.

Sheridan [35] reviews the status of Human-Robot Interaction (HRI) and proposes a set of challenges that need to be solved for proper interaction with autonomous systems. For the sake of brevity, we focus on three topics that Sheridan [35] identifies: (i) tasks must be clearly divided between humans and robots, i.e., when developing the system, we must explicitly define which tasks should be carried out by humans and/or autonomous systems. (ii) A direct consequence of a robot’s tasks is its physical form, whose definition is itself a major challenge. Furthermore, possible consequences of robot actions must be determined to avoid negative outcomes. Finally, (iii) humans need a mental model of the robot’s capabilities and vice versa—robots also need to understand human actions and reactions.

Furthermore, research suggests that humans feel better when autonomous machines explain their actions [36] and communicate their intents [65]. For example, different modalities can be used to inform pedestrians about an autonomous vehicle’s planned actions [38], which is important because pedestrians tend to rely on direct communication with drivers. Autonomous systems should also incorporate methods for signaling problems to humans, especially due to the increasing reliance on a robot’s actions. Users seem to perceive robots that communicate failures and problems as more trustworthy [39]. Honig and Oron-Gilad [40] also point out that recent research tends to focus on the technical reliability of autonomous systems and does not take problems resulting from error-prone interaction between human and machine into account. The authors suggest a taxonomy specifically distinguishing between technical failures and interaction failures, the latter including interaction between autonomous systems and humans as well as between different autonomous systems.

Alongside the new HCI capabilities, a new quality aspect of applications emerged: the User Experience (UX). Users today expect applications not only to function well but also to be fun and enjoyable. While no widely accepted definition of UX exists, most acknowledge that it is a highly subjective matter and that interaction with the application and context of use play a dominant role [41]. Hassenzahl and Tractinsky [42] describe three UX aspects derived from the scientific literature: technology is more than a mere tool; the user’s context and internal state are important; and each user has an individual, subjective sense of good UX. They find that these three aspects cannot be separated but overlap, and conclude that HCI surpasses the pure usefulness of applications and lies at the core of a good UX. Hinckley and Wigdor [43] and Watzman and Re [44] also define UX with a special focus on HCI. Furthermore, these authors emphasize that constant testing and evaluation with users are crucial to achieve a good UX.

However, creating a suitable UX is a challenging task [45]. While that work does not directly relate to autonomous systems, its underlying assumptions remain valid: designing UX implies specifically defining UX goals. In their example, the authors define two seemingly contradictory UX goals: using the system should feel like magic, and users should retain a sense of control over the system. While the former should surprise users, the latter implies that the system should react as expected. Both UX goals can easily be mapped to environments with autonomous systems: while such systems most certainly feel like magic—some mechanical thing doing something on its own—humans may still need to know why systems do things in order not to lose the feeling of being in control.

Tonkin et al. [46] describe a method to develop UX in HRI applications based on LeanUX [47] and Agile Science [48]. While the authors’ focus is on creating applications using humanoid robots, their approach can be translated to any application involving autonomous systems. Essentially, they extend classic UX design with two additional steps: personality design and interaction design. Personality design involves giving robots a demeanor depending on the desired UX. In simple terms, designers must define how autonomous machines react to humans: are they submissive to humans or competitive (which could be used, e.g., for serious gaming in working environments)? Humans tend to assign personalities to humanoid robots [49]; thus, specifically integrating personality design into the UX design process supports creating an appropriate UX. In any case, the designed personality must match the desired task, context, and working environment. Besides personality, Tonkin et al. [46] suggest specifically designing the interaction with the robot. As described before, interaction design is an important part of UX design and thus must be specifically tailored to the task at hand. In the approach of Tonkin et al. [46], UX designers should explicitly model the robot’s behavior for each activity, location, and task.

The UX always depends on technical constraints defined by the hard- and software used, essentially creating a natural threshold for possible interaction. While we should of course aim for the best UX, we need to settle for the optimal UX achievable within these constraints. UX development thus requires a cross-functional team combining a task-oriented design perspective and a technology-oriented engineering perspective to determine the best compromise between the desirable UX and the possible technical capabilities. Designers and engineers, however, tend to develop different solutions [50] and have different views on desired tasks and systems: while designers focus on the task to be supported and the user’s perspective—a top-down view—engineers tend to take technical capabilities more into account—a bottom-up view [51]. Engineers thus tend to create systems based on existing possibilities (essentially adapting the user to the system), while designers tend to generate alternative ideas (adapting the system to the user’s needs). Both approaches have their advantages and disadvantages, but in an optimal development process they cannot exist without each other, and both perspectives must be considered to find the optimal solution—as Tonkin et al. [46] suggest, UX development requires cross-functional teams.

Unfortunately, UX cannot be measured directly; thus, iterative development steps and constant testing are necessary [52]. It is furthermore decisive to identify possible sources of a critical UX and optimize the affected components. Steinfeld et al. [53] describe a set of metrics for HRI that are still used to evaluate autonomous systems [54], of which three biasing effects might influence the interaction’s effectiveness and performance: communications, robot response, and user. Especially the latter emphasizes that, besides technical capabilities, a deeper understanding of the human’s role within the system is necessary to create an acceptable UX. Steinfeld et al. [53] point to Scholtz [55], who differentiates between five possible roles for humans: supervisor, operator, mechanic, teammate, and bystander. While each of them might interact with the system, their tasks and requirements differ; thus, their respective UX and forms of interaction must be considered.

In our current work, we are focusing on the teammate and bystander roles. Both require humans and autonomous machines to consider each other and maybe—depending on the underlying approach to cooperation—to find consensus for actions. We developed a consensus algorithm for purely autonomous systems that we will subsequently extend to integrate humans.

Experimental HCI setting

Autonomous systems make decisions and act independently in their environment. However, this does not exclude them from communicating with other systems operating in the same environment. In the most challenging case, this can extend to cooperation in order to achieve overall goals, for example in a warehouse setting [e.g., 56] or to investigate serious accidents [57]. However, the need for communication and negotiation begins as soon as there is potential access to shared and limited resources in the environment. Communication can be organized differently, either centralized [e.g., 58, 59] or distributed [e.g., 60]. The term resource is not limited to concrete objects but is broadly defined in this context and includes everything whose allocation can lead to a potential conflict, such as locations on which a system moves or wants to store something. Especially in logistics, this results in a multitude of situations in which autonomous systems must encounter each other and find joint solutions. In a work environment in which humans and robots increasingly work together, this also affects humans as soon as they act in areas that are also used by autonomous systems. In the following, we demonstrate an example application for avoiding collisions on intersecting paths to show what a system for decentralized consensus finding can look like from a technical point of view, and we discuss the challenges of integrating people into such a system. We also structure the solution space and discuss possible solutions.

Finding consensus in decentralized autonomous systems

Decentralized decision-making means that there is no central system that is connected to all autonomous systems in the environment and makes higher-level decisions to coordinate them and resolve possible conflicts. This is first of all a question of system design and has to weigh different advantages and disadvantages. A centralized approach concentrates complexity, since decisions are made in only one (logical) place. This simply and effectively avoids many classic problems such as concurrency. At the same time, a single entity has comprehensive knowledge of the situation and can derive suitable solutions. Since all autonomous systems are linked to this entity, decisions which might lead to a conflict instead of a solution are automatically prevented. On the other hand, compared to decentralized communication infrastructures, centralization makes it more difficult to ensure, for example, reliability [61]. If the controlling node of the system fails, the entire system no longer works. A fallback mechanism must always be installed in this case to safeguard the system’s functionality. This is particularly important in systems that interact with a real environment in which humans act, too. For larger solutions, the scalability of the system is often a problem, since the computing effort increases exponentially with the number of autonomous units. Imagine, for example, a traffic control system for all vehicles in just one city area. One solution is to reduce the area of responsibility of a central unit to such an extent that its complexity remains manageable. For traffic control, for example, this can be done by restricting the geographical area of responsibility. Then, in turn, the entire infrastructure must be upgraded to such an extent that a central instance handling the control is available at every possible point of conflict in the whole area. This also applies to areas that are rarely or only potentially affected, which quickly becomes a cost factor for such architectural decisions in real-world applications.

A possible solution is not to delegate the decision-making process to external nodes but to rely on the autonomous systems themselves. This removes the need for area-covering control systems, since in case of a conflict the systems involved have the necessary logic in place. It also increases security in terms of reliability and latency [61], since conflicts can be solved independently at any time, regardless of the reliability of a central system. However, there may be situations in which no participant has a complete overview of the overall situation but only partial knowledge. This also means that there is no single central instance that can guarantee at all times that all conflicts in the overall system are handled properly. Responsibility instead lies with a group of nodes, their communication, and the emergent behavior typical for such systems. This also complicates system development, since the logic and the guarantee of system correctness can no longer be established locally [37] but are completely distributed over a set of autonomous systems, whose exact composition need not necessarily be known at development time. This is often the case, for example, in CPS [24].

To simplify the engineering of such decentralized approaches, we have developed a pattern for designing a suitable decentralized consensus algorithm. It is particularly applicable to safety-critical systems, as it prefers communication and propagation of information mainly at conflict-free times and thus tries to minimize the communication effort at the time of danger, or ideally to avoid it completely, in order to be as independent as possible from external influences such as network latency. Figure 2 shows the underlying design pattern. The system is divided into two states. The first is the normal state, in which the system exchanges status information with other systems. These are only the systems with which a conflict is likely in the near future, i.e., only a subset of all systems in the environment. Any discovery protocols or distributed registries can be used to find them. Each system is exclusively responsible for its own future conflict situations. Since at least two systems are involved in each case, appropriate redundancy is guaranteed. If an expected future conflict is calculated from the exchanged information, the system switches to the conflict resolution state. Here, stored rules calculate a reaction directly from the previously exchanged data. These rules must be designed in such a way that they come to the same solution regardless of which of the conflicting systems they run on. This does not mean that both choose the same reaction, but that neither determines a reaction that the other does not expect. The following example of a decentralized intersection control system for autonomous vehicles shows what this looks like in practice.
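The two-state pattern can be sketched as follows. The class and attribute names, the ETA-based conflict test, and the pluggable rule are illustrative assumptions; the pattern itself only prescribes the state split and that the rule yields mutually compatible reactions on every node.

```python
from dataclasses import dataclass
from enum import Enum, auto

class State(Enum):
    NORMAL = auto()               # conflict-free: exchange status information
    CONFLICT_RESOLUTION = auto()  # expected conflict: apply local rules only

@dataclass
class Status:
    system_id: int
    eta: float  # estimated arrival time at the shared resource (illustrative)

class ConsensusNode:
    """Sketch of the two-state pattern: propagate information while
    conflict-free, then derive a reaction purely from already-exchanged
    data without further communication."""

    def __init__(self, system_id: int, rule):
        self.system_id = system_id
        self.rule = rule   # rule(own, peer) -> reaction; must be deterministic
        self.state = State.NORMAL
        self.peers = {}    # status of the relevant subset of nearby systems

    def receive_status(self, status: Status) -> None:
        # Normal state: collect status from systems with a likely future conflict.
        self.peers[status.system_id] = status

    def step(self, own: Status, horizon: float) -> str:
        # Switch to conflict resolution once two ETAs fall within the horizon.
        for peer in self.peers.values():
            if abs(peer.eta - own.eta) < horizon:
                self.state = State.CONFLICT_RESOLUTION
                return self.rule(own, peer)
        self.state = State.NORMAL
        return "proceed"
```

Because the rule runs only on data exchanged in the normal state, no messages are needed at the moment of danger; the rule's determinism is what makes the reactions of both conflict partners compatible.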

Fig. 2 The underlying design pattern for the consensus algorithm

Prototype for collision avoidance in autonomous traffic

Our example scenario describes a decentralized system for collision-free control of intersecting travel paths of autonomous systems [37]. The general problem can be applied to autonomous systems in road traffic, warehouses, or industrial areas. We assume that all involved autonomous vehicles know their position, intended travel path, and current speed to determine estimated travel times into potentially dangerous zones with possible collisions (henceforth: danger zones) independently. While we focus on fully autonomous vehicles, the same principles apply to automated guided vehicles as well if their paths intersect, with the constraint that their movement options might be strictly pre-defined.
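Under the stated assumptions (known position, planned path, and current speed), the estimated occupancy times of a danger zone can be sketched as follows; the one-dimensional constant-speed motion model and all names are illustrative simplifications, not the prototype's actual implementation.

```python
def eta_to_zone(position: float, speed: float,
                zone_start: float, zone_end: float):
    """Estimated (entry, exit) times for a danger zone, assuming travel along
    the planned path at constant speed. All arguments are distances along the
    path; returns None if the zone is already passed or the vehicle is idle."""
    if speed <= 0 or position > zone_end:
        return None
    entry = max(zone_start - position, 0.0) / speed
    exit_ = (zone_end - position) / speed
    return entry, exit_

def zones_overlap(a, b) -> bool:
    """Two vehicles may collide if their occupancy intervals overlap."""
    return a is not None and b is not None and a[0] < b[1] and b[0] < a[1]
```

Each vehicle can evaluate this independently for every other vehicle found via the shared data, which is what makes the advance collision check decentralized.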

The general prototype works as follows. All vehicles continuously store their own data in a distributed hash table. The data includes the current position, planned route, and planned speed. The key is derived from the vehicle’s local position, which enables other vehicles to quickly find all information about relevant vehicles in the surrounding area without having to query all possible vehicles. Based on this information, each vehicle can check independently and in advance whether a collision is to be expected in the future and with whom. If an impending collision is detected, the respective vehicle enters a state for danger prevention. The practical problem here is that both involved vehicles could independently decide to brake in order to avoid a collision. As a result, the moment of collision only shifts, but the problem itself is not resolved. A joint consensus is necessary, which can be achieved either through an applicable communication protocol or, as in our approach, communication-free. We thus reduce dependencies on critical elements such as network communication. In order to find a consensus in the sense of a joint, consistent, and fitting solution, the system follows a few simple rules. Algorithm 1 below outlines our consensus algorithm. In simple terms, autonomous vehicles estimate their own and the other vehicle’s time of passage through a potential danger zone. As a rule, the later-arriving vehicle brakes (which should also optimize the traffic flow for efficiency). In the relatively unlikely scenario that both vehicles would enter the danger zone at exactly the same time, the vehicle with the smaller ID brakes. The other vehicle simply maintains its current speed. This guarantees that the joint solution works and impending collisions are prevented. We also added some extended features, such as the selection of energy-efficient decisions or the integration of vehicles with special rights [see 37 for details].

Algorithm 1. Decentralized consensus protocol for conflict avoidance
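The core decision rule described above can be sketched in a few lines of Python (omitting the extended features such as energy-efficient decisions and special-rights vehicles; function and value names are our own, not those of the published algorithm).

```python
def consensus_action(own_id: int, own_eta: float,
                     other_id: int, other_eta: float) -> str:
    """Core rule sketch: the later-arriving vehicle brakes; on an exact tie,
    the vehicle with the smaller ID brakes. Both vehicles evaluate the same
    function on the same exchanged data, so the joint outcome is consistent
    without any communication at the moment of danger."""
    if own_eta > other_eta:
        return "brake"        # we arrive later, so we yield
    if own_eta < other_eta:
        return "keep_speed"   # we arrive earlier, so we pass first
    # Exact tie: deterministic tie-break via the vehicle ID.
    return "brake" if own_id < other_id else "keep_speed"
```

Determinism and symmetry are the essential properties: for any pair of inputs, exactly one of the two vehicles brakes while the other keeps its speed.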

The described system prototype works well as long as all participants in the system are autonomous vehicles that communicate quickly with each other via computer interfaces and follow the defined rules deterministically. In many areas, however, such systems do not consist exclusively of autonomous vehicles but also involve humans. Thus, human actors must be integrated into our distributed consensus finding. Humans, unfortunately, cannot communicate at the same speed via the same machine interfaces. Furthermore, humans might act unpredictably or ignore potential technical support (e.g., navigational information). This poses several challenges for the necessary HRI components. If a person moves through a danger zone, it must be ensured that all necessary data is exchanged, that the decision is made quickly enough, and that it can then be executed without risk for any participant. We propose a potential model for collaborative decision-making in HRI allowing for several variations. When developing a system requiring interaction between humans and autonomous machines, developers must decide on one of the options for each use case to ensure the optimal UX.

Potential models of human-robot interaction for collaborative decision-making

We divide our model into three different categories including several variations: human first, robot first, and hybrid.

Human first

Zero communication

The easiest way for machines to circumvent the human speed disadvantage in communication is not to communicate with humans in the first place. Autonomous systems can still detect humans independently using sensors and react accordingly [e.g., 62, 63]. Since there is no communication, robots cannot predict how individual people will behave, but may use a general predictive model for human behavior [64]. The only way to guarantee safety is that autonomous systems do not take any action in any state where potential conflicts with humans exist. To avoid collisions, the robot has the tasks of detecting humans, determining all possible actions potentially endangering them, and avoiding these actions. With regard to our prototype, autonomous vehicles will always brake if a human or human-controlled vehicle is in a danger zone. However, this inevitably leads to the overall system being potentially inefficient, since the lack of information exchange forces autonomous vehicles to take the most extreme measures even in situations where they would not be necessary.
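A minimal sketch of this conservative rule, assuming a boolean sensor signal for human presence in the danger zone (function and parameter names are hypothetical):

```python
def zero_communication_speed(human_in_danger_zone: bool, planned_speed: float) -> float:
    """Human First, zero communication: without any information exchange the
    robot cannot rule out a conflict, so it must take the most conservative
    action whenever a human (or human-controlled vehicle) is sensed in the
    danger zone. Returns the speed the autonomous vehicle should adopt."""
    if human_in_danger_zone:
        return 0.0            # always brake to a stop: the only safe guarantee
    return planned_speed      # otherwise continue as planned
```

The inefficiency discussed above is visible directly in the code: the robot stops even when the human would in fact have passed long before any conflict.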

Information-based reasoning

To prevent the system from becoming inefficient due to a lack of communication, it is necessary to exchange information so that autonomous systems can estimate the likely actions of humans. The information need not be provided by the humans themselves but may originate from other systems: information about the tasks assigned to humans and similar data can also be helpful, as they have an influence on possible actions. Robots can thus build a more individual model of human decisions and actions. Nevertheless, humans might still react irrationally and thus provoke potential conflicts. For example, what if an employee sees a colleague from a distance and decides to walk in his or her direction? There are usually actions and interactions taking place in the environment that are not directly accessible to autonomous systems. Expected actions can only be weighed up probabilistically, and restrained behavior of the autonomous systems is still required. However, such a system gives people the greatest possible freedom. Especially in crowded scenarios where an autonomous system has to react to people most of the time, it must be considered to what extent these degrees of freedom prevent autonomous systems from fulfilling their tasks efficiently, as they can only ever act secondarily and have to react cautiously due to this partial unpredictability.
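The probabilistic weighing described above could, under strong simplifying assumptions, look as follows. The update factors and threshold are purely illustrative and not derived from the prototype:

```python
def conflict_probability(base_rate: float, task_leads_towards_zone: bool) -> float:
    """Toy individual model: start from a general behavioral base rate [64]
    and update it with external task information (assumed to come from,
    e.g., a work-order system, not from the human directly).
    The update factors are illustrative assumptions."""
    factor = 4.0 if task_leads_towards_zone else 0.5
    return min(1.0, base_rate * factor)

def cautious_action(p_conflict: float, risk_threshold: float = 0.1) -> str:
    """Restrained behavior: yield whenever the estimated risk of the human
    entering the danger zone is non-negligible."""
    return "brake" if p_conflict > risk_threshold else "proceed"
```

Even with task information, the robot must keep a conservative threshold, which reflects the residual unpredictability discussed in the text.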

Human to robot communication

In order to minimize uncertainty, a unidirectional communication channel from the human to the robot may be used. Different options exist; mobile systems such as smartphones or wearables are particularly interesting for this scenario. Humans may announce planned actions (e.g., going to place x) and thus offer a possible interface to communicate with the autonomous systems. On the side of the autonomous systems, this leads to improved models, as previously unknown human actions become plannable. However, various new questions arise that cannot be answered from a technical viewpoint. How do people interact with the system knowing that they can always influence things to their advantage? This may not be a problem in some scenarios, but situations where it is problematic are possible. In our intersection prototype, for example, vehicles with special permission, which require unconditional priority, may appear when the system is used in real road traffic. Leaving full control to human participants here can possibly lead to misuse to gain personal advantages (i.e., always claiming priority in crossing the intersection).
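Such an announcement could be a small structured message like the following sketch. All field names, as well as the rate-limiting guard against priority misuse, are hypothetical illustrations and not part of our prototype:

```python
from dataclasses import dataclass

@dataclass
class PlannedActionAnnouncement:
    """Unidirectional human-to-robot message, e.g., sent from a smartphone
    or wearable. Field names are illustrative."""
    worker_id: str
    destination: tuple       # announced target, e.g., a grid cell (x, y)
    announced_at: float      # timestamp of the announcement
    priority_claim: bool     # whether the human claims right of way

def accept_priority_claim(msg: PlannedActionAnnouncement,
                          recent_claims: int,
                          max_claims_per_hour: int = 3) -> bool:
    """Hypothetical guard against the misuse discussed above: a human who
    always claims priority would otherwise monopolize the intersection, so
    claims beyond a rate limit are rejected."""
    if not msg.priority_claim:
        return True           # plain announcements are always accepted
    return recent_claims < max_claims_per_hour
```

The guard illustrates one possible technical answer to the incentive problem, but whether rate limiting is socially acceptable is exactly the kind of non-technical question raised above.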

Robot first

In contrast to privileging humans, in a Robot First approach the duty of care can also be transferred to humans. This scenario is rather common nowadays: in factories, special fenced-off areas for robots are often used to indicate danger zones for humans (essentially simply locking humans out of the danger zone). An alternative could be providing information about robot actions to humans. An autonomous system’s actions are deterministic in many cases and can be displayed via corresponding unidirectional communication channels. For example, autonomous systems could project information about their actions in front of them [38]. Another possibility could be to transfer information to a device that humans carry, e.g., augmented reality glasses [65]. The autonomous systems then act as usual and humans are tasked with paying attention when entering danger zones. Human errors inevitably lead to safety problems, as the involved autonomous systems will not consider human actions specifically. In addition, it must be discussed whether such an approach fulfills the desired safety requirements. Another problem might be overloading humans with information in complex environments, potentially overwhelming them and provoking dangerous situations due to disorientation. Particularly in safety-critical areas, a potential danger to humans is not acceptable; in reality, such an approach will probably only work with additional safety mechanisms in autonomous systems that prevent emergencies.


Hybrid

The last option is equal integration, which requires bidirectional communication. Various verbal and non-verbal possibilities have been discussed in the past [66]. Autonomous systems need information about the human participants in the environment, and these in turn must be able to estimate the planned actions of the autonomous systems. All the approaches discussed above have problems in various areas. A Hybrid approach may help by allowing people and autonomous systems to work together to find a consensus on the most suitable solution to potential conflicts and, thus, maintain the efficiency and functionality of the overall system. In order to achieve such a system, many research questions remain open for future work, such as the design of the bidirectional communication channel, which to the best of our knowledge has not yet been solved comprehensively.


To compare the outlined approaches, we investigated the consequences using our intersection prototype. The experimental setup, some preliminary results, and possible limitations are discussed now. For the original prototype, we used Anki Overdrive [67], a set of autonomous model cars driving on an individually assembled track. Anki comes with a Software Development Kit (SDK) providing all necessary means to influence the vehicles, e.g., accelerate, brake, change lanes, etc. We used the SDK to connect digital twins to the vehicles that essentially implement the algorithm outlined above. The digital twins and the corresponding vehicles communicate via Bluetooth, which introduced some latencies into the setup. The original prototype was initially designed for purely autonomous systems. We extended the prototype and introduced humans and their interaction with the autonomous components. For this purpose, the control inputs of a vehicle were redirected in such a way that humans could operate a vehicle remotely via an application running on a desktop computer.
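The coupling between a digital twin and the shared hash table can be sketched as follows. This is a simplified stand-in: a plain dict replaces the distributed hash table, the SDK link is reduced to a callback, and none of the names correspond to the actual Anki Overdrive SDK API:

```python
class DigitalTwin:
    """Sketch of a digital twin publishing its vehicle's state under a
    position-derived key, so that nearby vehicles can be found without
    querying all participants. All names are illustrative."""

    def __init__(self, vehicle_id, send_speed_command, dht, cell_size=0.5):
        self.vehicle_id = vehicle_id
        # callback into the SDK over Bluetooth (source of the latencies
        # mentioned above); not exercised in this sketch
        self.send_speed_command = send_speed_command
        self.dht = dht                  # dict standing in for the distributed hash table
        self.cell_size = cell_size
        self.state = None

    def _key(self, position):
        # derive the DHT key from the local position (grid cell)
        x, y = position
        return (round(x / self.cell_size), round(y / self.cell_size))

    def publish(self, position, planned_speed, planned_route):
        # continuously store own position, route, and speed in the table
        self.state = {"id": self.vehicle_id, "position": position,
                      "speed": planned_speed, "route": planned_route}
        self.dht.setdefault(self._key(position), {})[self.vehicle_id] = self.state

    def neighbors(self):
        # query only the surrounding cells instead of all possible vehicles
        cx, cy = self._key(self.state["position"])
        found = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for vid, s in self.dht.get((cx + dx, cy + dy), {}).items():
                    if vid != self.vehicle_id:
                        found.append(s)
        return found
```

The position-derived key is the design choice that makes the lookup local: each twin only inspects its own grid cell and the eight surrounding ones.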

For the Human First scenario, we prioritized the human-controlled vehicle and examined a version in which the autonomous vehicles always reacted as soon as possible to the human-controlled vehicle, typically by braking and reducing their velocity so the human could pass before them. Due to the physical environment of the prototype, this could of course only take place within certain limits. Unforeseen actions shortly before the intersection could therefore result in a reaction time window that is too short (for both humans and robots) and, thus, increase the danger of a potential collision. Responsibility for collision avoidance was centered in the autonomous vehicles.

For the scenario Robot First, we removed the prioritization of the human-controlled vehicle and also forced the autonomous vehicles to ignore the human-controlled vehicle. The autonomous vehicles thus never reacted to the human-controlled vehicle and the human had to adapt their actions. Here it was the human’s task to avoid collisions.

In the Hybrid scenario, we implemented a version in which all participants paid equal attention to each other and found a consensus as described above. However, no suitable communication channel supporting the bidirectional exchange of information between autonomous vehicles and human drivers at the necessary speed exists right now, which obviously is a challenge as discussed above. We therefore replaced the human driver with a virtual agent randomly ignoring the calculated consensus with a probability of 5% to mimic irrational human behavior, but otherwise following the protocols like the autonomous systems to come to joint solutions.
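The virtual agent's deviation behavior can be sketched as follows (the concrete seed value is illustrative; in the experiments, the same seed was reused across all setups):

```python
import random

class VirtualHumanAgent:
    """Stand-in for the human driver in the Hybrid run: follows the consensus
    protocol like the autonomous vehicles, but ignores the computed consensus
    with 5% probability to mimic irrational human behavior."""

    def __init__(self, seed=42, deviation_rate=0.05):
        # fixed seed guarantees the same sequence across runs, mirroring
        # the experimental setup described below
        self.rng = random.Random(seed)
        self.deviation_rate = deviation_rate

    def act(self, consensus_action, own_preferred_action):
        if self.rng.random() < self.deviation_rate:
            return own_preferred_action   # irrationally ignore the consensus
        return consensus_action           # otherwise cooperate like a robot
```

Over many decisions, the agent deviates in roughly 5% of cases, which is the controlled dose of "irrationality" injected into the Hybrid run.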

The tests ran for 10 min each, whereby the autonomous vehicles were able to adjust their speed randomly over time. This was necessary to avoid the overall system falling into a (theoretically possible) perfect state, where all vehicles could maintain their velocity without causing problems in the danger zone. The random generator was always initialized with the same seed, so that the same sequence was guaranteed in all setups. For each run, we acquired several variables:

  • The distance covered by each vehicle, provided by the underlying SDK.

  • The number of intersection crossings, determined by having the digital twin count how many times a vehicle passed a defined position.

  • The number of collisions per vehicle by manually counting all occurrences and identifying the involved vehicles.
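The second metric can be sketched as a small digital-twin-side counter; the crossing line and coordinates are illustrative:

```python
class CrossingCounter:
    """Counts how often a vehicle passes a defined position, modeled here
    as crossing the line x = line_x in the positive direction. A digital
    twin would feed this counter with its vehicle's position updates."""

    def __init__(self, line_x):
        self.line_x = line_x
        self.crossings = 0
        self._last_x = None

    def update(self, x):
        # a crossing occurred if the line lies between the previous
        # and the current position
        if self._last_x is not None and self._last_x < self.line_x <= x:
            self.crossings += 1
        self._last_x = x
```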


Table 1 shows the results of all scenarios for distance and number of intersection crossings. On average, the two robots involved achieve roughly the same results. This corresponds to the expectations, as both robots have the same configuration and no different prerequisites. It also shows that no unwanted misconfigurations occurred, as otherwise we would expect to obtain different values for the robots, which are not connected. The human results, however, differ from the robots’ and between all three scenarios. In total, results are highest for the Hybrid scenario and lowest for Human First. Figure 3 contrasts the number of collisions and Fig. 4 the distances covered. Again, of all scenarios, Hybrid scores best with the least number of collisions. Interestingly, humans could only achieve the same values as the robots in the Hybrid scenario; the distance and the number of crossings were smaller in the other two experimental situations. This even applies to the Human First approach, where the robots consistently consider human decisions. Despite the higher total distance covered and the larger number of crossings, the number of collisions was also lowest in the Hybrid scenario.

Table 1 Evaluation results
Fig. 3. Collisions due to uncertainty and delays

Fig. 4. Distance covered by the actors involved

In the Human First approach, the least distance was covered on average, as the autonomous systems acted with caution and the overall system thus became slower. We recorded an increased number of collisions, mainly due to the failure of the autonomous systems to react in time to very short-term human decisions. Some collisions must be attributed to technical constraints, specifically network latency, within our prototype environment; but as we used the same environment for all three test runs, the influence on the overall results should be minimal. Interestingly, the constant reaction also had an effect on the collisions between the autonomous systems. An autonomous vehicle’s reactions to the human driver instantly triggered cascading reactions from other autonomous vehicles, since these had to adapt to the changed environment and coordinate their actions.

In the Robot First approach, the distance covered by the autonomous systems increased considerably, but it was much more difficult for the human-controlled vehicle to cover a longer distance, because the human had to act cautiously. In this run, no collisions between autonomous vehicles were detected, but collisions with the human-controlled vehicle occurred constantly. In the Hybrid approach, the distance covered is still relatively high and, in particular, very balanced between autonomous and human-controlled vehicles. The cumulative total distance of all participants was highest in this run. Although collisions between autonomous systems occurred here in a similar way to the Human First approach, their number was lower. Overall, the number of accidents was lowest here.

Our experimental setting provides hints that it is worth developing methods that lead to a hybrid and equal cooperation between humans and robots. Therefore, it is necessary to dive deeper into the relevance of the human factor from a work science perspective as provided in the next section in order to develop promising HCI concepts.

Work science perspective on AI adoption and acceptance

AI adoption and acceptance

Drivers are an important group of workers in logistics and transportation. Automation and AI applications such as cruise control systems [10, 68, 69] are introduced in their work environment to improve efficiency and competitiveness, the safety of drivers and other traffic participants, as well as working conditions [70, 71, 73]. The outstanding problem is the often insufficient connection between humans and automation, resulting in critical issues [74]. Cummings and Bruni [75] emphasize the necessity of promoting HCI as a collaboration, especially in highly complex decision-making situations, since not all factors can be included in the algorithms. They describe functions that can be assigned to the actors (either human or AI) in the decision-making process, differentiated into (i) the moderator, who ensures the progress of the decision-making process; (ii) the generator, who searches, identifies, and develops decisions; and (iii) the decision-maker, who finally decides or has a veto.

However, the use and outcome of such automated concepts in logistics depend on the collaboration with humans and their acceptance of such systems [75]. This is also true for the simulated traffic control situation on shopfloors, e.g., with forklifts, as these vehicles, whether driven by human workers or operating as AI-based automated systems, have the joint task of avoiding accidents and sharing the same traffic space efficiently. Thus, the potential for substantially improving performance is often obstructed by users’ unwillingness to accept and use technology [74, 76, 77]. Certain models and theories aim to explain the acceptance of new technologies, like the UX models mentioned in Section 3. For example, the Technology Acceptance Model (TAM) measures the effects of certain functionalities (design features) of computer-based information systems in organizational contexts on user acceptance with the help of intervening variables. Davis [78] assumes that, given concrete functionalities, both the perceived usefulness and the perceived ease of use determine acceptance. Perceived usefulness is defined as “the degree to which an individual believes that using a particular system would enhance his or her job performance”; perceived ease of use is defined as “the degree to which an individual believes that using a particular system would be free of physical or mental effort” [78]. So far, numerous empirical studies have supported the TAM [79, 80]. Furthermore, models theorize the adoption of an innovation by an individual [81], which can be transferred to dealing with new technologies and AI applications in digitalized work settings. The innovation-decision process leads from first knowledge of an innovation to forming an attitude towards it, to a decision to adopt or reject, to implementation and use, and to confirmation of this decision [82]. Consequently, the process typically consists of the following phases:

  (i) Knowledge: The individual gains knowledge of an innovation and how it works; the acquisition of knowledge depends on socio-economic characteristics, personality traits, and communication behavior.

  (ii) Persuasion: The individual forms a positive or negative attitude toward the innovation. The evaluation depends on the perceived relative advantage resulting from the innovation compared to the previous technology, the compatibility with existing values, experience, and demand, the trialability as the possibility to try out an innovation, the observability as the degree to which the results of an innovation are available to others, and the system’s complexity.

  (iii) Decision: The individual engages in activities that lead to a choice to adopt or reject an innovation.

  (iv) Implementation: The individual puts an innovation to use.

  (v) Confirmation: The technology will not be adopted further if another superior innovation is available or if the acquirer is dissatisfied with the innovation.

The rate of adoption is the relative measure with which members of a social system adopt an innovation, operationalized as the number of individuals adopting new technologies within a certain time. Most of the variance in the rate of adoption of an innovation is explained by the attributes introduced above: relative advantage, compatibility, trialability, observability, and complexity [82]. Reactance can be expected when people perceive restrictions, given that they value freedom and autonomy [83, 84]. Moreover, there might be a gap between the individual’s views, attitudes, and intentions and the actual behavior, as the Theory of Reasoned Action (TRA) proposes. This theory is based on the assumption that the behavior of individuals results from certain intentions, which depend on the attitude of a person as well as on social influences [85]. The TRA is refined by the Theory of Planned Behavior (TPB), which focuses on situations in which individuals do not have complete control over their behavior and proposes that the individual’s self-efficacy is decisive [86].

From a temporal perspective, human interaction with AI applications and automation [14, 17] can be characterized by three hurdles or areas of resistance (see Fig. 5). Once an area is overcome, acceptance usually settles in [87, 88].

Fig. 5. Human acceptance model for AI applications [11]

The three depicted hurdles (“increased resistance areas” or “waves”) are connected to three AI functional areas and represent an increasing, but temporary, level of resistance (y-axis) throughout this development, in line with an increasing level of personal intrusion (x-axis):

  (i) Level of AI competences: Automation and AI applications require new competencies from humans. Since these are comparatively less frightening, the resistance level towards them is relatively low. For logistics, this may include, for example, the automated gearbox in truck driving, automated routing and navigation systems, as well as automated intralogistics applications like order retrieval and warehouse transportation systems. These systems have in common that usually any final decision, e.g., regarding the traveled street, is in reality still taken by humans; in many cases, AI suggestions from navigation systems are not followed through by humans, an obvious sign of resistance.

  (ii) Level of AI decisions: Increasingly, also in logistics, AI applications are providing management decisions, which usually raises greater anxiety and resistance levels among humans. This is the case, for example, in the expected physical internet environment, where AI applications undertake specific transport and distribution decisions, as symbolized in the Robot First simulation case before [2, 3]. In such concepts and work environments, humans are more anxious, as core management tasks are addressed and human workers might possibly be replaced. Understandably, this sort of AI application raises higher levels of rejection among humans, usually also requiring a longer period of adaptation before, again, acceptance can settle in, as described for instance by Weyer et al. [15].

  (iii) Level of AI autonomy: In a final stage, AI applications are responsible for a range of different decisions, leading to autonomous behavior, as for instance in fully automated manufacturing and shop floor environments [29, 89]. In these cases, humans usually adopt a passive supervision role [90]. These applications are at the doorstep to industrial and real-world application, in production and warehousing [12, 91] or road traffic (fully autonomous cars and trucks). This lets the highest level of resistance emerge among human workers, as they usually feel excluded from day-to-day decisions and in many cases cannot really understand how decisions are taken (e.g., with which information or based on which algorithms).

Connected to the experimental setting before, these levels or hurdles can be seen as stages along a sequential level of personal intrusion, arriving at a completely new situation after the three hurdle areas: the situation of trust with respect to an AI application, where humans are inclined to trustfully and intuitively cooperate with such applications [92]. This is closely related to the Turing test, where passing the test implies that humans are not able to distinguish between a human and an artificial counterpart in their communication with another unknown entity [93]. The stage of AI trust is a special form of passing the Turing test, as it can be assumed that a human being may only be able to develop trust towards an AI application if an interaction-based evaluation judges the collaboration partner to behave like a human being. This is a crucial and business-relevant form of trust between human beings and AI applications in logistics for a successful partnership, taking into account realities in a world of human cooperation as well as HCI. In addition, this extends the traditional view of technology acceptance [80], where application-specific trust and acknowledgement by human workers was tested and analyzed. Now, long-term work situations like driving and machine handling, with possibly life-threatening situations, require trust towards AI applications. Trust in turn can be seen as a major prerequisite for workers to develop intuition, as this can only be trained and grown by trial-and-error experience in practice, not in theory. Therefore, the next section discusses the question and growth circle of intuition and self-efficacy along such experience processes for workers.

Intuition and self-efficacy in logistics

Recently, it has been argued that intuition might be able to complement rationality as an effective decision-making approach in organizations [94, 96]. Intuition helps to cope with a wide range of critical decisions and is integral to successfully completing tasks that involve high complexity and short time horizons [97]. However, conceptualizations of intuition lack clarity so far. In general, intuition is differentiated into reliance on gut feelings (creative intuition) and reliance on past experiences (justified intuition) [98]. Adding to this discussion, Carter et al. [99] consider intuition a major human driving force in decision-making for logistics management. On the basis of a qualitative content analysis of academic literature and interviews with supply chain management experts, and quantitative testing with experts in supplier selection, they conceptualize intuition as a multidimensional construct consisting of the following dimensions: (i) experience-based intuition implies that persons recognize parallels to past decisions in making the current decision and, thus, refer to knowledge that builds over time; (ii) emotional processing means that affect or (positive and negative) feelings guide decisions and actions; and (iii) automatic processing implies that persons decide quickly and almost instantly without awareness or knowledge of specific decision rules. Based on their analyses, they argue that intuition is rather experience-based in situations with high time pressure, while emotional processing can be found in contexts with both high time pressure and high information uncertainty. Against this background, dual-processing theories (intuition vs. rationality) obviously seem to be too simplistic, since different dimensions of intuition occur alongside each other.

Any approach to intuition has in common that intuitive judgments occur beneath the level of conscious awareness (i.e., they are tacit). The importance of intuition in working behavior has already been emphasized for top executives, revealing that intuition was one of the skills used to guide their most important decisions [95, 100]. Khatri and Ng [101] surveyed senior managers of companies representing the computer, banking, and utility industries in the United States and highlight that intuitive processes are in fact used in organizational decision-making, conceptualizing intuition as a form of expertise or distilled experience based on deep knowledge. Hogarth [102] also argues that intuition is effective when a person is knowledgeable and experienced within a certain domain. The effective use of intuition has even been seen as critical in differentiating more from less successful workers [97]. Especially for workers in digitalized logistics settings, intuition seems to be decisive in view of the requirement to cope with a wide range of critical decisions, highly complex tasks, and short time horizons. Interestingly, this can underline and explain the results of the experimental setting from Section 3, with the hybrid decision model as the most efficient one.

In search of approaches to enhance intuition, Agor [100] reveals that physical and/or emotional tension, anxiety, and fear are conceived as factors that impede the use of intuition, while, in turn, positive feelings such as excitement lead to an enhanced sense of confidence in one’s own judgments. Burke and Miller [103] argue that mentors, role models, and supervisors, as well as working with diverse groups of people and learning about their decision-making styles, help to develop intuition. In this sense, intuition is based on implicit learning [104] and automaticity [105]. Interestingly, these key determinants for using intuition tie in perfectly with the core assumptions of Social Cognitive Theory [SCT; 106]. SCT accounts for the development and maintenance of self-efficacy across a broad range of knowledge, values, and associated behavior patterns [107]. An efficacy expectation is the conviction that a person can successfully execute the behavior required to produce certain outcomes. In this sense, self-efficacy determines whether coping behavior will be initiated, how much effort will be expended, and how long it will be sustained in the face of obstacles and aversive experiences [108]. Thus, self-efficacy regulates behavior, effort, and persistence over extended periods of time and is concerned with individual beliefs about one’s capability to cope with certain and new situations and to use one’s abilities to solve problems and tasks, e.g., whether people feel competent in using and learning how to use new technologies. In this respect, it can be expected that individuals with high self-efficacy are more proactive and confident in their decision-making and dealing with digital devices and show greater intuition. Moreover, self-efficacy is similar to the concept of perceived ease of use as defined above [76].
In work contexts, the following levers in particular are identified to support workers’ self-efficacy: (i) performance accomplishments are based on personal mastery experiences in the sense that repeated success enhances self-efficacy; (ii) vicarious experience means that seeing others perform activities and observing the consequences generates expectations that persons will improve and intensify their efforts as well; (iii) verbal persuasion is quite common to influence human behavior, i.e., people are led through suggestions; and, finally, (iv) emotional arousal affects perceived self-efficacy in coping with certain situations, i.e., removing dysfunctional fears.

In our context of logistics workers, the concept of self-efficacy is decisive for the use of digital devices and the perceived ease of use. Also, the role of intuition has already been emphasized for successful workers in other fields, guiding important decisions in working life. However, more research is required to understand the role of human intuition within an IoT and AI application environment in logistics and supply chain processes. Especially in these fields, we find new technologies and digitalized working contexts requiring the intuition of workers and affecting their behavior and reaction speed. Moreover, the link between intuition and self-efficacy has not been discussed, although these two concepts show striking commonalities and provide approaches to further develop human intuition as a basis for effective decisions in digitalized work contexts in logistics. As outlined, AI is developing fast within the supply chain and logistics management domain, and there is a considerable body of literature concerning intuition and self-efficacy. However, both subjects have been brought forward mainly independently of each other. This constitutes a major research gap, as obviously HCI concepts and existing business experience are testimony to the fact that the human factor largely influences technology implementation and AI efficiency, especially in logistics, as seen in Sections 2 and 3. Thus, research concepts combining these perspectives are needed urgently, as mid- to long-term lead times can be expected until results are obtained and transferred into applicable concepts for increasing AI application effectiveness in logistics.


Summary of findings

The three discipline-specific parts and perspectives have established distinctive findings for use in an HCI efficiency description in production logistics:

  (i) The production logistics management state-of-the-art analysis and process study provided the insights that human workers have to be included in any automation steps and projects in order to improve effectiveness, and that automation is happening in many contexts and areas. Although there is broad concept development with the IoT and I40 strategies, it is also obvious that not all implementation endeavors will be profitable in every context. Therefore, the research requirement of differentiating between more and less promising HCI approaches in production logistics automation is crucial for automation success.

  (ii) The computer science literature review and simulation established that HCI and HRI scenarios and concepts are on the move and often refer to direct (ergonomic) interaction of humans, robots, and cobots. Therefore, the simulation study presented here, with individual tasks (traffic control, vehicle steering) for humans and robots alike, is a new approach with interesting insights regarding HCI on a “peer-to-peer level”. In this context, the simulation results provide hints that hybrid decision models might be more efficient than purely human- or robot-centered approaches.

  (iii) The work science analysis outlined that management concepts, especially regarding technology acceptance, intuition, and self-efficacy of workers, are highly relevant for HCI concepts. Especially human resistance towards automation and AI or robot applications is crucial and differs regarding the degree of acceptance. Therefore, management approaches addressing HCI in production logistics also have to incorporate such human intuition and resistance questions.

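The policy contrast in finding (ii) can be made concrete with a minimal right-of-way rule. The Python sketch below is an illustrative simplification, not the prototype’s actual control logic: the policy names mirror the Robot First, Human First, and hybrid approaches discussed here, while the waiting-time tie-breaking used for the hybrid case is our own assumption.

```python
from enum import Enum


class Actor(Enum):
    HUMAN = "human"
    ROBOT = "robot"


def right_of_way(policy: str, human_waiting_since: float,
                 robot_waiting_since: float) -> Actor:
    """Decide which actor may cross a contested intersection.

    'robot_first' and 'human_first' ignore waiting times entirely and
    always favor one side; 'hybrid' grants the crossing to whoever has
    waited longer (earlier arrival time), so neither side is starved.
    """
    if policy == "robot_first":
        return Actor.ROBOT          # human always "takes a step back"
    if policy == "human_first":
        return Actor.HUMAN          # robots always yield to humans
    # hybrid: first-come, first-served across both actor types
    if human_waiting_since <= robot_waiting_since:
        return Actor.HUMAN
    return Actor.ROBOT
```

Under the one-sided policies, the yielding side accumulates waiting time regardless of how long it has queued; the hybrid rule bounds the waiting time of both sides, which is one intuition behind the simulation finding that hybrid models might be more efficient.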
Conceptual contribution–HCI efficiency description

As identified in the preceding analysis steps, the question of HCI in production logistics is, among other factors, influenced by the acceptance, adoption, intuition, and reaction speed of the human as well as the computer actor(s). The following figure outlines the presumed connection between HCI and decision efficiency as a possible success factor, indicating that an optimum might be feasible regarding the adjusted learning and reaction rates of humans as well as automated actors (see Fig. 6). The x-axis shows the “reaction mode” of the actors in the system: on the left side, only human actors react to the computer actors (e.g., letting automated transport vehicles pass when a crash seems likely and therefore “taking a step back”, representing a Robot First approach); on the right side, only the autonomous computer actors react to the (fixed) actions of human actors, e.g., stopping when human-driven transport vehicles approach in order to settle the right of way and avoid accidents (Human First approach). The y-axis represents the level of decision efficiency, measured, e.g., in the number of passes realized at an intersection, or alternatively in the number of successful passes relative to accidents occurring.

Fig. 6

HCI efficiency description

The interesting area is the development in between, when both sets of actors increasingly (and, vice versa, decreasingly) react to the actions of the other actor group. It is proposed that the actual decision improvement (measured, e.g., in the number of vehicle passes handled at intersections) strongly depends on the mutual learning, adjustment, and reaction rates of human and computer actors. With very low and very high adjustment rates, decision efficiency can be presumed to decline, as the actors either lapse into a “deadlock” (e.g., human and computer vehicles both stopping at intersections, waiting for each other) or into too much mutual activity (e.g., vehicles starting and stopping too often in reaction to each other). A medium rate of adjustment and reaction on both sides, in contrast, might actually improve decision efficiency as depicted, forming an “optimum lens” of HCI interaction.
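The presumed inverted-U relation can be written down as a stylized toy model. The quadratic deadlock and overreaction penalties below are purely illustrative assumptions, not fitted to any simulation data; they merely reproduce the qualitative shape of the “optimum lens”, with the efficiency peak at a medium mutual adjustment rate.

```python
def decision_efficiency(r: float, base: float = 1.0,
                        w_deadlock: float = 1.0,
                        w_overreaction: float = 1.0) -> float:
    """Stylized decision efficiency over the mutual adjustment rate r in [0, 1].

    r -> 0: both sides barely react and wait each other out (deadlock losses);
    r -> 1: both sides react to every move of the other (stop-and-go losses).
    The quadratic penalty weights are illustrative assumptions only.
    """
    return base - w_deadlock * (1 - r) ** 2 - w_overreaction * r ** 2


# With equal weights the curve peaks exactly in the middle; in general the
# peak sits at w_deadlock / (w_deadlock + w_overreaction).
rates = [i / 100 for i in range(101)]
best_rate = max(rates, key=decision_efficiency)
```

In this toy formulation, making one penalty dominate shifts the optimum, e.g., a high deadlock weight pushes the best adjustment rate upward, which is consistent with the qualitative argument that the optimum depends on the relative cost of under- versus overreaction.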

Practical implications

The interdisciplinary analysis of this paper has shown that the single most important factor for successful automation concepts lies in how new technologies are implemented and how human workers are integrated on the pathway towards automated systems [74]. It is potentially less crucial what the final “steady state” automated production system looks like, given that new systems will emerge very fast, too, than how the way there is structured and how framing and motivation are incorporated to optimize HCI [69, 109]. Consequently, successful AI and robotics implementation in HCI contexts requires [110]:

  1. (i)

    that AI is used as an aid to the worker—including the UX design requirement—and not only as an improvement in manufacturing,

  2. (ii)

    that the workers are informed about the upcoming change, see an advantage in using new AI and robotics systems, have the possibility to test such systems, and that the introduction of AI represents only a small change to existing assembly practice; and

  3. (iii)

    that the solution combines the strengths of human workers with those of AI and robotics.

Whereas many ideas and experiences might be transferable from existing approaches to improve the efficiency and success rate of automation concepts and HCI in production logistics settings, new insights are also required, for example on how automation and robotics concepts have to be designed in order to allow for smooth change management in the specific application field of production logistics and to develop the workers’ self-efficacy as outlined in Section 4 [11, 17, 19, 111, 112].

This is highly relevant for management practice, as projects for further automation steps are on the doorstep of most companies. The HCI efficiency description highlighted in Fig. 6 might be an important practical contribution in this context, as the suggested best solution, a medium adjustment rate or speed for human and AI actors in production logistics settings, might be counter-intuitive especially for operations people like engineers.

Conclusion and outlook

Altogether, future competitiveness and logistics performance will depend on the described factors regarding human acceptance, intuition, and interaction in HCI and HRI, as well as related concepts like self-efficacy. The challenge for production logistics will include the question of how to align the human factor with automation, as highlighted in this study. The specific contribution of this paper consists in establishing the value of an interdisciplinary approach to HCI and HRI settings in production logistics and can be outlined as follows:

  • The first results point in the direction that mixed systems and steering mechanisms, with human and robot actors cooperating also in small-scale decision making (traffic control example), are most efficient, superior to one-sided models such as Robot First or Human First decision approaches.

  • In the proposed HCI efficiency description explained in Section 5, the cooperation approach was detailed further, warranting additional research and testing in order to arrive at optimized and efficient settings in production automation.

  • Practical implications entail suggestions for the design, implementation and revision of AI and robotics systems in production work settings.

The limitations and avenues for further research can be outlined as follows. (i) While our work is pioneering, some limitations remain to be clarified; nevertheless, a first trend towards hybrid systems that needs further investigation is visible. The intersection prototype was implemented with Anki Overdrive [67], a track on which small model cars drive autonomously. The Anki SDK provides methods for algorithmic control via Bluetooth, which leads to latency delays. No programming logic can be deployed directly to the vehicles, so each vehicle is represented by an autonomous digital twin (a so-called agent) that calculates and interpolates the necessary data within our system. For example, the vehicles do not have any distance sensors, forcing us to calculate the necessary information from the overall system status and to distribute it to all agents. (ii) We also mimicked the human-controlled vehicle in the Hybrid run due to the lack of a suitable communication interface between humans and vehicles. The results therefore only serve as a base for assessing the possible properties of such a solution. (iii) Moreover, this study addresses only production logistics, and the applied HCI simulation sample represents a very restricted setting. It would have to be extended for further insights, e.g., regarding the number of vehicles, the mix of human and computer drivers, or the geographical extension. (iv) Finally, the proposed HCI efficiency description requires further discussion and empirical grounding. Qualitative and quantitative research in different industries in particular would be required to validate several aspects.
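The sensor-less distance reconstruction mentioned under limitation (i) can be sketched as follows. The class, field, and function names are hypothetical and do not correspond to the actual Anki SDK; the sketch only illustrates how a digital twin can derive its headway on a closed track from the broadcast positions of all vehicles instead of from an on-board distance sensor.

```python
from dataclasses import dataclass


@dataclass
class VehicleState:
    vehicle_id: str
    track_position: float  # position along the closed track, e.g., in mm
    speed: float           # current speed, e.g., in mm/s


def headway(own: VehicleState, fleet: list[VehicleState],
            track_length: float) -> float:
    """Distance to the nearest vehicle ahead on a closed track.

    Computed from the globally distributed system state; the modulo wraps
    negative differences so vehicles 'behind' in coordinates but ahead on
    the loop are handled correctly. Returns the full track length when no
    other vehicle is present.
    """
    gaps = [
        (other.track_position - own.track_position) % track_length
        for other in fleet
        if other.vehicle_id != own.vehicle_id
    ]
    return min(gaps) if gaps else track_length
```

Each agent would evaluate such a function on every state broadcast, which is also where the mentioned latency matters: the headway is only as fresh as the last distributed system status.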

Interdisciplinary research approaches and results are urgently required in order to provide the necessary insights into the development and successful design of production automation in the form of AI and robotics implementation. This will be a high-profile research field in the decade to come in light of IoT and Physical Internet developments [22, 113,114,115,116].