1 Introduction: Human-Autonomy Teaming in Maritime Contexts

The concept of human-autonomy teaming (HAT) describes humans and intelligent, autonomous agents working interdependently toward a common goal (O’Neill et al., 2022). As a new form of collaboration, HAT is researched under multiple heterogeneous terminologies, such as human-agent teams (Chen et al., 2011), human–robot collaboration and hybrid teams (Straube & Schwartz, 2016), human–robot teaming (Endsley, 2017), or socio-digital teams (Ellwart & Kluge, 2019). In this chapter, we use the term HAT consistently.

HAT has been described as at least one human working cooperatively with at least one autonomous agent (McNeese et al., 2018). An autonomous agent is understood as a computer entity or robot with a partial or high degree of autonomy in terms of decision-making, adaptation, and communication (O’Neill et al., 2022). Compared to traditional human–human teams (HHT), HAT poses qualitatively new challenges for teamwork (Ellwart & Schauffel, 2021). Autonomous agents are capable of making decisions independently of human control. Chen et al. (2018) distinguish between autonomy at rest (e.g., intelligent software systems) and autonomy in motion (e.g., robots). Because functional HAT combines the strengths of humans and machines in a complementary way (i.e., human intelligence and artificial intelligence, human and agent skills), HAT can achieve complex goals that are unreachable by either humans or machines alone. For example, the inspection of ship hulls needs to be time- and cost-efficient, precise, safe, and highly reliable; these goals can be achieved simultaneously when humans and machines interdependently combine their expertise and strengths.

To work interdependently, synergistically, proactively, and purposefully toward a shared goal, human members and autonomous agents in HAT regulate their actions based on coordinative processes (e.g., communication) as well as cognitive and motivational-affective states (e.g., situation awareness, system knowledge, or system trust). Psychological models and research on HHT offer thoroughly researched taxonomies of team variables to explain and predict both dysfunctional and functional cooperation and coordination (Ellwart, 2011; Mathieu et al., 2008). These HHT models have been transferred to human–machine interactions (e.g., Ellwart & Kluge, 2019; You & Robert, 2017), pointing out several key variables that are also highly relevant in the maritime context. The models show that functional HAT must be considered from a task-specific perspective in the maritime sector, balancing key perspectives on the human (e.g., human team members’ knowledge, skills, and personality), technical (e.g., features of the autonomous multi-robot system), and organizational sides (e.g., legal regulation or maritime culture; see Fig. 1).

Fig. 1 Holistic perspective on human-autonomy teaming in ship inspection and maintenance. A block diagram connects the psychological perspectives on human-autonomy teaming with the organizational context, the multitask ship inspection process, and the interface-interconnected system of human team members and the autonomous multi-robot system. (Note: Exemplary multitask ship inspection process scheme [Task X1–Xn]. Source: Authors)

In the maritime context, the inspection and maintenance of large vessels such as bulk carriers is an important pillar of maritime services. Thousands of medium-sized to large ships cross the world’s seas. To date, ship inspection and maintenance has been a largely manual field of work, but the introduction of autonomous systems (i.e., autonomous robotic systems, intelligent software agents, etc.) offers benefits for human safety (e.g., reduced work accidents), the economy (e.g., time- and cost-efficient services), and the environment (e.g., reduced fuel consumption). The World Maritime University (2019) highlights the automation potential of inspection drones, repair robots, or condition-based maintenance systems, and emphasizes that advanced user interfaces will provide a whole new user experience.

In multiple interdisciplinary research projects (EU projects with a focus on maritime autonomy concerning ships and ports, e.g., BUGWRIGHT2, 2020; RAPID, 2020; ROBINS, 2020), researchers and practitioners collaborate to unleash the full potential of such novel technologies. Among these, the EU project BUGWRIGHT2 (Autonomous Robotic Inspection and Maintenance on Ship Hulls) aims at developing an autonomous multi-robot system for the inspection and maintenance of ship hulls, combining diverse autonomous technologies including aerial drones, magnetic-wheeled crawlers, and underwater drones as well as virtual reality and augmented reality in the user interfaces (see Fig. 1).

The present chapter has two central aims. First, we underline the benefits of combining psychological models, system engineering, and end-user perspectives to develop and introduce functional HAT in the maritime sector. To this end, Sect. 2 elaborates on three psychological perspectives that are crucial for evaluating functional HAT and mirrors them against the views of system developers and end-users in the specific application context of BUGWRIGHT2 (maritime voices). Second, we aim to reflect on future developments of HAT in maritime services. To this end, Sect. 3 elaborates on the adaptability of HAT configurations and poses questions for designing the next generation of autonomous maritime technology.

2 Psychological Perspectives on Human-Autonomy Teaming in Ship Inspections

The implementation of HAT including multi-robot systems in ship inspection and maintenance will transform HHT into HAT. Research in work psychology and human factors outlines numerous interdependent factors that are relevant for functional cooperation in HAT (Ellwart & Schauffel, 2021; O’Neill et al., 2022; You & Robert, 2017). This chapter can only address a narrow selection of critical factors (see also Schauffel et al., 2022). We focus on three psychological perspectives that have received substantial scientific attention (e.g., meta-analyses, reviews) and were identified as critical for HAT in the specific context of ship inspection and maintenance across stakeholders: (1) the level of autonomy (LOA), (2) system trust, and (3) system knowledge and features.

These critical factors are reflected in the light of the specific maritime application within the project BUGWRIGHT2. The often abstract theoretical concepts of work psychology take on special significance against the background of voices from concrete application and feasibility perspectives in the maritime sector. This not only highlights the ecological validity of the theoretical concepts but also the need to involve end-users and developers in close exchange during system development. Therefore, an extensive interview series was conducted to reflect on the potential needs, opportunities, and challenges of HAT in ship inspection and their consequences for end-user acceptance. Relevant maritime stakeholders (Johansson et al., 2021; Pastra et al., 2022) participated in the interview series (e.g., shipyards, service suppliers, shipowners, and ship inspectors). In line with theoretical models of technology acceptance (Venkatesh et al., 2016) that have been successfully applied to the context of HAT (e.g., Bröhl et al., 2019) and models of human-centered system design (Karltun et al., 2017), the results from 23 expert interviews point to multiple critical factors of HAT that holistically touch the human element, the technological systems involved, and the organizational context of maritime inspection and maintenance (see Fig. 1, thin arrows). Participation in the interview study was voluntary, and withdrawal was possible at any time during the interview without consequences. The interviews were conducted via video and lasted about one hour each. Interview statements were documented and clustered qualitatively. The present chapter documents excerpts from the interview results. For further details on the interview methods and results, see the official project homepage (BUGWRIGHT2, 2020).

Table 1 summarizes the psychological perspectives, supportive interview statements from maritime stakeholders (maritime voices), and empirical evidence from the field of work psychology and human factors that we focused on in this chapter. In detail, each perspective is discussed in the following paragraphs. Quotes included in the paragraphs refer to interviewees’ comments, which are compiled in Table 1 for an overview.

Table 1 Psychological perspectives for human-autonomy teaming including exemplary interview statements from maritime experts and references to related research evidence

2.1 Level of Autonomy

Conceptualization. The level of autonomy (LOA) refers to the degree of system autonomy in HAT, ranging from no autonomy (i.e., manual human control) through semi-autonomy (i.e., partial system independence in task realization; the human can veto) to full autonomy (i.e., high system independence; the human is at most informed). To avoid an overly abstract treatment, LOA is differentiated across four specific task types (Parasuraman, 2000; Parasuraman et al., 2000): information acquisition, information analysis, decision selection, and action implementation. Each task type can be realized at each LOA. In addition to a task-specific evaluation of LOA, Schiaretti et al. (2017) highlight that, concerning maritime autonomy, each technological subsystem must be evaluated separately regarding its LOA. For example, in a multi-robot system for ship inspection and maintenance, magnetic-wheeled crawlers, aerial drones, and underwater drones have different LOAs that may vary depending on the specific subtask (e.g., monitoring the steel plate thickness, generating options for the mission paths, or executing additional thickness measurements or visual inspections of critical hull areas based on former inspection reports).
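To make this task- and technology-specific view concrete, the following minimal Python sketch models the four task types and per-subsystem LOA assignments. All subsystem names, level assignments, and the coarse three-step LOA scale are illustrative assumptions for this chapter, not BUGWRIGHT2 specifications.

```python
from enum import IntEnum

class LOA(IntEnum):
    """Coarse three-step autonomy scale (finer-grained scales exist in the literature)."""
    MANUAL = 0  # no autonomy: manual human control
    SEMI = 1    # semi-autonomy: the system acts, the human can veto
    FULL = 2    # full autonomy: the system acts, the human is at most informed

# Parasuraman et al.'s (2000) four task types
TASK_TYPES = ("information_acquisition", "information_analysis",
              "decision_selection", "action_implementation")

# Hypothetical LOA profiles: each subsystem is rated separately per task type
loa_profiles = {
    "magnetic_wheeled_crawler": {
        "information_acquisition": LOA.FULL,   # e.g., thickness measurement
        "information_analysis": LOA.SEMI,      # e.g., flagging critical plates
        "decision_selection": LOA.MANUAL,      # e.g., seaworthiness judgment stays human
        "action_implementation": LOA.SEMI,     # e.g., path execution with human veto
    },
    "aerial_drone": {
        "information_acquisition": LOA.SEMI,
        "information_analysis": LOA.SEMI,
        "decision_selection": LOA.MANUAL,
        "action_implementation": LOA.MANUAL,
    },
}

def requires_human(subsystem: str, task_type: str) -> bool:
    """True if the task type still needs active human involvement (below full autonomy)."""
    return loa_profiles[subsystem][task_type] < LOA.FULL

print(requires_human("magnetic_wheeled_crawler", "decision_selection"))  # True
```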

Different LOAs have unique consequences for human operators and HAT performance. Multiple strategies exist for allocating a task to a human or an autonomous agent in HAT (Rauterberg et al., 1993). Their functionality or dysfunctionality for HAT can be evaluated based on well-established criteria of functional teamwork and human-centered work design (e.g., DIN EN ISO 9241–2; Klonek & Parker, 2021; Wäfler et al., 2003). Importantly, a high LOA does not necessarily result in human benefits (e.g., reduced cognitive load, monotony, or stress) but may also correspond to dysfunctional outcomes, highlighting the two-sidedness of high LOA concerning situational awareness and human control. The dilemma is that a high LOA combined with high system reliability and robustness results in decreased situational awareness and a limited ability of the human operator to resume control in critical situations (i.e., the automation conundrum; Endsley, 2017).

Maritime Voices and Concluding Proposition. Reflecting on the LOA (see Table 1) and in line with theory and research, maritime experts highlighted the task-specificity of LOA (“We need different LOAs for different tasks”) when anticipating HAT in ship inspections. Ship inspection is a highly complex multi-phase process including preparation, operation, and reporting phases (Pastra et al., 2022). Referring to the task types of Parasuraman et al. (2000), maritime experts formulated a clear need for human decision-making, for example when deciding on the to-be-inspected areas of the ship and the final evaluation of the results (i.e., seaworthiness certificate), challenging the allocation of responsibilities and decision rights within HAT. The technology-specific focus on LOA was also mentioned by maritime experts, including clear anticipations of a rather high LOA for the magnetic-wheeled crawlers and lower levels for the aerial drones. In addition, it becomes clear that LOA is not static but a dynamic element of HAT, as constant technological development and team habituation might lead to flexible adaptation of a specific LOA. For example, “the LOA underwater is not clear at the moment,” considering current technological challenges regarding video streaming and localization underwater. Furthermore, maritime experts state that “humans need a feeling of control over the swarm teams,” thereby referring to humans’ basic needs. Humans have an inherent and fundamental need for control and autonomy (Deci & Ryan, 1985, 2000). However, the concept of LOA adopts a strong focus on system autonomy: the higher the system’s LOA, the lower the control and autonomy of the human interacting with the technical systems. A large body of research in work psychology has elaborated on the crucial role of human autonomy (i.e., control) for performance, individual well-being, and motivation (Deci & Ryan, 1985; Hackman & Oldham, 1976; Olafsen et al., 2018). It must therefore be a goal of HAT design to balance technical LOA and human control; humans’ basic need for autonomy must not conflict with system autonomy. In addition, stakeholder statements indicate that the LOA serves functional HAT if it is high enough to enable parallel work and the optimization of existing work processes (“We need LOAs that allow people to do separate tasks at the same time as robots are inspecting the ship”).

Taken together, empirical evidence and stakeholder comments illustrate that LOA can serve functional HAT in ship inspection when agent autonomy and human control are continuously balanced at a task- and technology-specific level. There is no simple all-or-nothing principle; rather, LOA must be balanced and adaptable, evaluated and designed against the background of the task at hand.

2.2 System Trust

Conceptualization. System trust describes the willingness to depend on a technology due to its characteristics (McKnight et al., 2011). In the context of maritime HAT, the object of dependence is multifaceted, including heterogeneous robotic technologies (e.g., magnetic-wheeled crawlers, underwater drones). System trust depends on multiple factors that are rooted in the technology, the human, the task, and the organizational context (see Hancock et al., 2011). For maritime applications, following Pastra et al. (2022), technical robustness and safety, data governance and regulation, and policies are the most vital elements of system trust. However, the authors emphasize that system trust may differ depending on the human element (e.g., skills), the specific vessel (e.g., age or type), and situational environmental conditions (e.g., in-water visibility). Thus, system trust is not static but dynamic and develops over time. First- and second-hand experiences shape trust dynamics, and dispositional aspects (i.e., the propensity to trust) also strongly influence system trust in HAT, especially in the early stages of technology adoption (Hoff & Bashir, 2015). Subjective competence comparisons between a human and an autonomous agent also impact system trust (Ellwart et al., 2022), given that humans have a basic drive to compare themselves with others in a group or a team (Festinger, 1954). Regarding the optimal level of system trust, not the highest but a well-calibrated level of system trust is required, as both mistrust and overtrust are associated with performance reduction (Parasuraman & Manzey, 2010).
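The notion of trust calibration can be illustrated with a toy model in which subjective trust is updated from first-hand experience and compared against the system’s actual reliability. The update rule, thresholds, and numbers below are purely illustrative assumptions, not a validated psychological model.

```python
def update_trust(trust: float, success: bool, rate: float = 0.1) -> float:
    """Toy exponential update: trust drifts toward 1 after successes
    and toward 0 after failures (illustrative only)."""
    target = 1.0 if success else 0.0
    return trust + rate * (target - trust)

def calibration(trust: float, reliability: float, tol: float = 0.15) -> str:
    """Compare subjective trust with the system's actual reliability."""
    if trust > reliability + tol:
        return "overtrust"  # risk of complacency and misuse
    if trust < reliability - tol:
        return "mistrust"   # risk of disuse of a capable system
    return "calibrated"

trust = 0.5         # neutral starting point
reliability = 0.9   # assumed true success rate of the inspection robot
for outcome in [True, True, False, True, True, True]:
    trust = update_trust(trust, outcome)
# A single early failure keeps trust well below actual reliability
print(round(trust, 2), calibration(trust, reliability))  # 0.66 mistrust
```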

Maritime Voices and Concluding Proposition. Reflecting on system trust (see Table 1), maritime experts highlight that the maritime context might pose a special challenge for HAT, stating that ship inspection and maintenance “is a traditional field of work, with high rigidity, low technology trust, and high skepticism.” Maritime HAT thus requires a paradigm shift and cultural change. High end-user participation might foster such cultural change and establish system trust in maritime autonomy, but the timing of end-user participation is crucial. Early robot failures in particular lower trust (Desai et al., 2013). Therefore, “there is the risk of testing too early or too late.” Referring to the aspect of trust calibration (i.e., neither too high nor too low system trust), end-users “need realistic expectations about robot features,” including “a clear understanding of what the system can do and cannot do, with precise examples in terms of autonomous navigation and positioning.” Such mental models of HAT help humans to calibrate trust appropriately in routine and especially non-routine tasks. Of note, considering system trust alone falls short when discussing HAT in ship inspection, as multiple human stakeholders will remain active in the inspection process. Thus, interpersonal trust will remain focal alongside system trust. In addition, a high LOA of individual technologies calls for a discussion of inter-robot trust, which further complicates the topic of trust in maritime HAT.

Taken together, well-calibrated system trust that considers human uniqueness as well as autonomy’s strengths and limitations serves functional HAT, whereas both overtrust and mistrust reduce HAT functionality. System trust is thereby subjective and dynamic, developing over time, with different trust levels for routine and non-routine situations.

2.3 System Knowledge and Features

Conceptualization. System knowledge is a key aspect of functional HAT and describes “the human’s understanding of the general system logic, its processes, capabilities, and limitations” (Rieth & Hagemann, 2021a, p. 5). In the context of maritime autonomy, two domains of system knowledge should be distinguished. First, short-term system knowledge refers to transparent communication and situation awareness in HAT. Here, interface design can help to maintain a constantly high level of situational awareness and foster agent transparency (see Schauffel et al., 2022). A large body of research in human factors and work psychology highlights agent transparency and situational awareness as a crucial knowledge domain for system trust, adaptation, and coordination (Chen et al., 2018). Second, a long-term perspective on system knowledge refers to knowledge about system features, (team) goals, roles, and tasks. Unlike situational awareness, long-term knowledge integrates the operators’ understanding of tasks, roles, goals, and work processes from administrative guidelines with experiences learned during operations. Here, for example, high reliability during operation is vital, referring to the accurate functioning of autonomy over time and the reproducibility of the tests performed (Pastra et al., 2022). Moreover, accurate mental models of HAT tasks, roles, and responsibilities help to establish well-calibrated system trust and to guarantee appropriate human competences (e.g., through training or certification), as human competence demands will increase in HAT (Rieth & Hagemann, 2021b). Communication between the system and the human operator is crucial for the development of both situational knowledge and long-term mental models: it helps the human understand the system’s current decisions and integrate this experience into long-term mental models.

Maritime Voices and Concluding Proposition. Reflections from maritime experts support that high reliability (“Robots need to be sensitive to the ship structure with the reliability of 100 out of 100”), in combination with precise examples of robot strengths and limitations, is strongly needed for functional HAT. It becomes evident that end-user participation reveals concrete technological elements that need to be considered in robot design (e.g., safe mode, proximity sensor; see Table 1). Maritime experts note that aspects of communication between human and autonomous entities in HAT are so far open questions. Communication needs to be two-sided, meaning that humans can intervene in robot missions (“The human end-user has to be able to interfere if he decides to do so, based on his long-year experiences or intuition”) and autonomous technologies can actively contact humans in critical situations (“The robot should be able to give a warning sign to the human user”). System knowledge also refers to new roles and tasks that accompany the implementation of a multi-robot system (e.g., drone driving, robot calibration; see Table 1).
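As a rough illustration of such two-sided communication, the following Python sketch lets the robot side proactively raise warnings while the human side can interrupt a running mission at any time. All class and method names are hypothetical and do not describe the BUGWRIGHT2 interface.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class MissionChannel:
    """Hypothetical two-way human-robot communication channel."""
    warning_handlers: List[Callable[[str], None]] = field(default_factory=list)
    aborted: bool = False

    def on_warning(self, handler: Callable[[str], None]) -> None:
        """Human side: register to receive proactive robot warnings."""
        self.warning_handlers.append(handler)

    def report_anomaly(self, message: str) -> None:
        """Robot side: actively contact the human in a critical situation."""
        for handler in self.warning_handlers:
            handler(message)

    def human_override(self) -> None:
        """Human side: intervene in the mission at any time."""
        self.aborted = True

channel = MissionChannel()
channel.on_warning(lambda msg: print(f"WARNING to operator: {msg}"))
channel.report_anomaly("Undefined sensory input near hull frame")
channel.human_override()  # e.g., based on the inspector's experience or intuition
print("Mission aborted:", channel.aborted)
```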

Taken together, functional HAT requires accurate knowledge about ongoing team processes as well as about robot features, along with the resulting consequences for human competences, roles, and responsibilities.

3 Envisioning the Next Generation of Maritime Human-Autonomy Teaming

Looking at current developments of maritime robotic systems, such as the BUGWRIGHT2 example described above, it is noticeable that although the technical solutions include a certain degree of autonomy, the systems cannot yet be assumed to be fully self-governed when operating in complex tasks. Visions of highly autonomous systems, in which autonomous robots take over complex activities and work interdependently with humans, are being researched and developed. The factors described above (i.e., system trust, LOA, and system knowledge) remain relevant for functional HAT in the next generation of maritime autonomy that includes fully autonomous systems, but they are supplemented by a factor that is critical for self-governed systems: team adaptability. Adaptability means that systems can detect changes in the environment and select alternative courses of action that fit the new situation. Adaptability in complex environments such as maritime inspections must be described and designed on three levels: (1) reactive adaptability, (2) reflective adaptability, and (3) long-term applicability and strategic adaptability.

Reactive Adaptability. A reactive level of adaptability means that a system comprising humans and robots recognizes changing requirements and situations during task operation and can adjust its behavior. In work psychology, Rico et al. (2019) speak of adaptation through implicit coordination during task action, when team members anticipate the information or behavior needed in a given situation and react “automatically.” The prerequisite is that both the autonomous technical system and the human operator have valid situational awareness to detect changes and possess appropriate knowledge of how to react in the given situation. As a result, no explicit command is necessary, because the team of humans and autonomous agents “knows” alternative action plans for certain situations or anticipates human needs. For example, in a maritime context, robots should recognize and avoid obstacles or be programmed to communicate new, undefinable sensory inputs to the operator without being asked. From a research perspective, there are only a few empirical papers on this type of adaptability, mostly in the context of aviation and pilot teams with human and software agents. For example, Johnson et al. (2021) showed that coordination training between software agents and human pilots led to better adaptation in critical situations through higher communication anticipation. Brand and Schulte (2021) developed a workload-adaptive and task-specific cognitive agent for helicopter crews that adjusted its support by identifying task situations and the crew’s workload. Liu et al. (2016) showed in a human–robot interaction study that participants were highly sensitive to the anticipative adaptation of a robot while interacting with it; robots that adapted to human actions over time were preferred as work partners over non-adaptive ones.
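A minimal sketch of such a reactive adaptation cycle, under the assumption of a simple rule-based robot: where an alternative action plan is known (obstacle ahead), the system adapts implicitly; where sensory input cannot be classified, it proactively informs the operator without being asked. The signal names and thresholds are illustrative.

```python
def reactive_step(sensor_reading: dict, known_classes: set, notify_operator) -> str:
    """One cycle of a hypothetical reactive adaptation loop."""
    # Known critical situation: adapt implicitly, no explicit command needed
    if sensor_reading.get("obstacle_distance_m", float("inf")) < 0.5:
        return "evade"  # switch to an alternative path plan
    # Undefinable input: anticipate the human's information need and report it
    if sensor_reading.get("object_class") not in known_classes:
        notify_operator(f"Unclassified sensory input: {sensor_reading}")
        return "hold_and_report"
    return "continue"

action = reactive_step(
    {"obstacle_distance_m": 2.0, "object_class": "unknown_marking"},
    known_classes={"weld_seam", "plate", "anode"},
    notify_operator=print,
)
print(action)  # hold_and_report
```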

Reflective Adaptability. A reflective level of adaptability means that humans and robots can reflect on task performance after an action period, evaluate performance feedback, and (re-)plan behavior for the subsequent action phase. In work psychology, Rico et al. (2019) speak of adaptation through explicit coordination during a transition phase (i.e., between two action phases). Successful adaptation during transitions relies on valid and shared situation awareness that feeds back functional and dysfunctional performance from the action phase. Moreover, successful adaptation in transitions relies on explicit communication to reflect on prior achievements and plan future tactics (Ellwart et al., 2015). This level of adaptation places high interaction-related demands on HAT: on the one side, sensors and user interfaces have to support human-autonomy reflection; on the other side, the system’s software must be able to handle such tactical adjustments. In the maritime context, for example, humans would evaluate the robot’s inspection performance and feed back missing information or mistrust of the robot, which leads to adjustments in subsequent inspection phases. Probably because of the technical challenges, there is little research on reflective adaptation in HAT. Kox et al. (2021) investigated trust repair strategies between robots and humans during transition phases: when the robot failed at its task, the system fed back expressions of regret and explanations, which resulted in higher trust repair.

One type of reactive or reflective adaptation is the concept of adaptive LOA. This means that formerly autonomous actions of the robot become manually controlled (or vice versa), depending on task or team characteristics. HAT may adapt the LOA of the robot or software agents depending on system errors (Chavaillaz et al., 2016) or the workload of the human (Calhoun et al., 2011). Adaptive LOA may be implemented automatically during action (i.e., reactively) or after task reflection on demand by the human team member. In this vein, the concept of socio-digital self-comparisons may be relevant for future research. When humans compare their task-related competences with those of robots, perceived advantages of robot competences (compared to one’s own) are related to task allocation toward the robot (Ellwart et al., 2022). Thus, adaptive LOA may also impact the evaluation of one’s own and the robot’s competences in a given situation.
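Error- and workload-triggered LOA adaptation, as studied by Chavaillaz et al. (2016) and Calhoun et al. (2011), can be sketched as a simple decision rule; the thresholds, signal ranges, and level encoding below are illustrative assumptions, not values from those studies.

```python
def adapt_loa(current_loa: int, error_rate: float, workload: float) -> int:
    """Hypothetical adaptive-LOA rule on a 0-2 scale
    (0 = manual, 1 = semi-autonomous, 2 = fully autonomous)."""
    if error_rate > 0.2:       # system unreliable: hand control back to the human
        return max(0, current_loa - 1)
    if workload > 0.8:         # operator overloaded: let the agent take over more
        return min(2, current_loa + 1)
    return current_loa         # otherwise keep the current allocation

loa = 1                                               # start semi-autonomous
loa = adapt_loa(loa, error_rate=0.05, workload=0.90)  # overload -> 2 (full autonomy)
loa = adapt_loa(loa, error_rate=0.30, workload=0.40)  # errors -> back to 1 (semi)
print(loa)  # 1
```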

Long-term Applicability and Strategic Adaptability. While reactive and reflective adaptation focus on short-term adjustments of HAT during a given sequence of action and transition phases, there is also a long-term perspective on the applicability and adaptability of HAT. Field interviews on ship inspection in the maritime sector within the BUGWRIGHT2 project pointed toward long-term issues that are closely related to user acceptance and knowledge needs before implementation. For example, inspectors of ship hulls asked whether the autonomous system can be used sustainably over a long time without any loss in quality and performance. This relates to technical reliability after years of application, but also to the question of whether the system will fit the demands of the future. Thus, systems need to adapt strategically to changing conditions, such as new ship types, new inspection or software regulations, and new workflows. To implement these adaptations successfully, close cooperation between members of HAT and system developers is required, not only in the phase of technology introduction but also in the long term over the life cycle of the HAT.

4 Conclusion

From a psychological perspective, the collaboration between humans and self-governing systems can be described as a complex interaction of numerous factors at the level of the human, the technology, and the organization. The robot is then no longer just a tool but an autonomous team member in HAT. The resulting requirements for the design of maritime HAT can be developed in an interdisciplinary collaboration between work psychologists, system developers, and end-users in a participatory manner. Yet, there is no single optimal design solution. In this context, well-researched interaction processes as well as cognitive and emotional states from psychological models can provide a frame of reference for designing functional and adaptive systems. Thereby, the specific task must be at the center of system design. It makes a difference whether robots gather data for ship hull inspections autonomously and hand this information to a human inspector for a decision, or whether robots gather data and autonomously decide about the seaworthiness of the ship and the hull’s safety. The optimal design solution is always bound to the specific task and thus opens up a wide range of application perspectives for HAT in the maritime sector.