1 Introduction

Modern sociotechnical systems are characterised by high structural and behavioural complexity. They often consist of a large number of heterogeneous components with complex properties and nonlinear interactions between them. Multiagent systems (MAS) have proven to be a suitable paradigm to model the dynamics of such systems (see e.g. [1]).

There is broad consensus that situation awareness (SA) plays a key role in agent-based modelling of complex sociotechnical systems. In the literature there are different views on what SA is and how it could be modelled. More specifically, one school of research considers SA as the process of gaining awareness [2], a second school refers to it as the product of gaining awareness [3], whereas a third school sees SA as a combination of process and product. Representatives of the third school take an ecological approach and describe SA as a ‘generative process of knowledge creation and informed action taking’, e.g. Smith and Hancock [4]. According to their view, one’s interaction with the world is directed by internally held mental models. The outcome of interaction modifies these mental models, which in turn directs further exploration. Support for the view promoted by the third school also emerges from a series of studies of conflicts between multiple agents [5]. Conflicts are an inherent part of the dynamics of sociotechnical systems. Furthermore, conflicts between beliefs and goals of agents are common in intra- and intergroup dynamics in a MAS. As argued in [5], conflicts may occur as mere differences or contradictions, but also as social conflicts. Hence conflicts are identified as an essential part of a MAS that captures complex sociotechnical system behaviour.

In order to integrate SA in a multiagent model of a sociotechnical system, the framework of Endsley [3] is often taken as a starting point. Following Endsley’s definition [3], situation awareness refers to the level of awareness that an individual has of a situation; to an operator’s dynamic understanding of ‘what is going on’. The SA model proposed by Endsley is based on human information processing theories and comprises three levels. At Level 1, an individual perceives the state, attributes, and dynamics of task-related elements in the surrounding environment. At Level 2, data perceived at Level 1 are interpreted and understood in relation to the individual’s task and goals. At Level 3, the individual predicts future states of the system and elements in the environment based on their current state. Endsley and Jones [6] extend the original SA model of Endsley to shared SA and introduce differences in SA between multiple human agents in a sociotechnical environment. In Endsley and Jones [7], this model is used in shared SA requirements analysis for the design of sociotechnical systems.

Typically, in agent-based models based on Endsley’s SA model, individual actors are considered at the basic level as isolated information processing entities, e.g. [8–10], and as required, social abilities of and interaction between actors are built on top of such individualistic models. However, in the area of agent-based modelling of sociotechnical systems it has been recognised that classical individualistic models of agents (e.g., based on the Belief-Desire-Intention framework) are not able to capture many aspects of social dynamics. Following Dignum et al. [11], humans are at their core social beings, and thus social aspects should be addressed not as an addition, but at the core of any model that involves interaction between agents. Similar arguments have recently been made for robotic systems [12]. Such a paradigmatic shift of view on agent-based modelling of sociotechnical systems calls for novel models of SA and SA relations between agents at their core.

In order to make progress in this challenging and divided domain of research, in this paper we develop a mathematical framework for modelling and analysis of multiagent SA (MA-SA) which is based on MA-SA relations in a system of multiple agents. For this development we take advantage of the insight gained by applying the MA-SA model of Stroeve et al. [13] to agent-based safety risk analysis in air traffic management [14]. However, the development in the current paper is different. The MA-SA model in [13] extended the model of Endsley [3] by incorporating non-human agents, whereas the current paper uses the framework of Endsley and Jones [6] as a starting point to also capture MA-SA relations and shared MA-SA between multiple human agents in a sociotechnical system.

The SA definition provided in [6] implicitly considers human agents only, whereas the MA-SA framework developed in this paper also includes non-human agents. This provides the basis for a subsequent development of a series of complementary extensions:

  • MA-SA relations between two agents may be asymmetric, i.e. agent A may maintain SA about certain state elements of agent B, while agent B maintains SA about no or other state elements of agent A. Moreover, following Gerran’s [15] Theory of Mind, MA-SA relations may involve more than two humans, e.g. human agent A may maintain SA about the SA maintained by human agent B about human agent C.

  • MA-SA in a MAS is defined through MA-SA relations. This also applies to MA-SA differences and shared MA-SA. The MA-SA relations support a systematic approach in differentiation between self-awareness, SA about another agent, and SA about non-agent entities.

  • The MA-SA update processes at the three levels of Endsley are made more specific in terms of: Observation or Messaging at Level 1, Interpretation at Level 2, and Projection at Level 3.

  • A distinction is made between MA-SA differences that are known to exist, and MA-SA differences that are unknown to exist; the latter are referred to as MA-SA inconsistencies.

The paper is organised as follows. In Section 2 we give a formal presentation of the SA framework of [6] for a sociotechnical system containing N human operators. The SA relations in this framework are defined through design requirements on sharing SA. Section 3 introduces and elaborates a novel MA-SA relationship for a system of N agents. Section 4 characterizes the MA-SA update processes in a MAS. Section 5 distinguishes MA-SA differences that are known to exist from those that are unknown to exist. Section 6 illustrates the application of the novel framework to the Überlingen mid-air collision accident. Section 7 provides concluding remarks.

2 SA framework of Endsley and Jones

We consider a sociotechnical system containing N human operators \(H_i\), \(i = 1,\ldots,N\), amidst an environment of multiple non-human entities that all together are represented by \(H_0\). At moment t, \(H_i\), \(i = 1,\ldots,N\), has SA \(\sigma_{t,i}\), which is a finite set of multi-dimensional stochastic processes, each of which has realizations in a well-defined state space. Endsley and Jones [6] assume that each pair of human operators has certain requirements regarding the similarity of their SAs. In order to capture this during the design of a sociotechnical system, Endsley and Jones define SA requirements for team members: “SA requirements are those SA elements that need to be shared between team members”. Subsequently, Endsley and Jones define “Shared SA is the degree to which team members have the same SA on shared SA requirements”. Shared SA requirements, according to Endsley and Jones [7], may concern data (e.g., about a system, other team members), comprehension (e.g., of status relevant to own or other’s goals, of impact of own actions on others and of actions of others on self) and projection (e.g., of actions of team members). Shared SA between two humans is the degree to which the SA elements of their shared SA requirements are equal, with fully shared SA having the highest degree.

In order to formalize this, denote by \(R_{i,k}\) the set of SA elements that have to be shared between humans \(H_k\) and \(H_i\), \(i \neq k\). Formally, we define \(R_{i,k}\) as a set of \(N_{i,k}\) different pairs \((s,r)_j\), \(j \in [1, N_{i,k}]\), where s points to the s-th element of \(\sigma_{t,i}\), which is denoted as \(\sigma_{t,i}(s)\), and r points to the r-th element of \(\sigma_{t,k}\), which is denoted as \(\sigma_{t,k}(r)\). Then, fully shared SA between humans \(H_k\) and \(H_i\), \(i \neq k\), applies if the SA elements in the set of their shared SA requirements \(R_{i,k}\) are equal. More precisely, humans \(H_k\) and \(H_i\) have fully shared SA if

$$ \sigma_{t,i}(s)=\sigma_{t, k} (r), \,\, \forall (s,r) \in R_{i,k}. $$
(1)

If similar conditions are satisfied for all other pairs of humans, then all humans in the sociotechnical system considered have fully shared SA.

For example, in an air traffic context, a pilot and an air traffic controller need to share information about the location of the pilot’s aircraft. Assume the aircraft location is maintained by the pilot and the controller as SA elements \(\sigma_{t,pilot}(s)\) and \(\sigma_{t,controller}(r)\) respectively. Then the pair (s, r) will be in the set \(R_{pilot,controller}\) of SA elements that have to be shared between the pilot and the controller. If in this example \(\sigma_{t,pilot}(s) = \sigma_{t,controller}(r)\), then the pilot and the controller share information about the location of the pilot’s aircraft. However, if \(\sigma_{t,pilot}(s) \neq \sigma_{t,controller}(r)\) then there is an SA difference between the pilot and the controller. Similarly, there may be SA sharing or an SA difference between the controller and the pilot of another aircraft.

Following Endsley and Jones [6], if all humans involved have the same but erroneous SA about their environment \(H_0\) of non-human entities, then the conditions of fully shared SA between all humans in the sociotechnical system are still satisfied. In the above example, this means that \(\sigma_{t,pilot}(s) = \sigma_{t,controller}(r)\), while this does not exclude the possibility that both SAs about the location of the aircraft differ from the true aircraft location. This example shows that it is worthwhile to include non-human entities in the framework of [6].
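To make the fully-shared-SA condition (1) concrete, the following Python fragment is a minimal sketch of the check for the pilot-controller example above. The representation of SA vectors as dictionaries and all element names are illustrative choices, not part of the framework.

```python
# Minimal sketch of the fully-shared-SA condition (Eq. 1): two humans have
# fully shared SA iff every required pair of SA elements is equal.
# The dictionary representation and the names sa_pilot, sa_controller, R
# are illustrative assumptions.

def fully_shared_sa(sa_i: dict, sa_k: dict, requirements: set) -> bool:
    """Check sigma_{t,i}(s) == sigma_{t,k}(r) for all (s, r) in R_{i,k}."""
    return all(sa_i[s] == sa_k[r] for (s, r) in requirements)

# Pilot and controller both maintain SA about the pilot's aircraft location.
sa_pilot = {"own_aircraft_position": (47.77, 9.17)}
sa_controller = {"tu154_position": (47.77, 9.17)}
R = {("own_aircraft_position", "tu154_position")}

print(fully_shared_sa(sa_pilot, sa_controller, R))  # True: SA is fully shared
```

Note that, as the text observes, this check can succeed even when both SA elements differ from the true aircraft location; it tests sharing, not correctness.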

Having formalized the Endsley and Jones model of shared SA and SA difference for a collection of humans in a sociotechnical system, our next step is to introduce a similar but different relationship formalism for a system of multiple agents that need not be human.

3 Novel MA-SA framework for a system of N agents

In contrast to Section 2, where we formalized Endsley and Jones’s [6, 7] SA design requirements, this section aims to formalize the SA relations that are maintained in a MAS. Because a MAS may involve different types of agents, these MA-SA relations need not be symmetric, e.g. agent A may maintain SA about agent B, but not the other way around.

3.1 MA-SA relations in a system of N agents

We consider a MAS consisting of N agents \(A_i\), \(i = 1,\ldots,N\), and a set \(A_0\) of non-agent entities that are in the environment of these N agents. In the MAS domain, reactive and proactive behaviours of agents are often distinguished. Reactive behaviour is a simple, event-driven ‘stimulus-response’ type of behaviour. Proactive behaviour refers to a more complex, goal- or motive-driven behavioural type, including adaptation.

We assume that at moment t, \(A_i\) has state \(x_{t,i}\), \(i = 0,\ldots,N\). The state \(x_{t,i}\) of an agent \(A_i\) may have multiple state elements. Note that in this section we do not yet make any assumption on which elements of \(x_{t,i}\) are SA elements and which are not. Agent \(A_i\) may maintain state elements about other state elements of itself, of other agents \(A_k\), \(k \neq i\), or of \(A_0\). To capture such relations between state elements of different agents, we denote by \(S_{i}^{k}\) the multiagent situation awareness (MA-SA) relation of agent \(A_i\) regarding agent \(A_k\). Similarly as \(R_{i,k}\) in Section 2, \(S_{i}^{k}\) is a set of \(N_{i}^{k}\) different pairs \((s,r)_{j}\), \(j\in [1,N_{i}^{k}]\), where s points to state element \(x_{t,i}(s)\) and r points to state element \(x_{t,k}(r)\).Footnote 1

To illustrate the difference between \(S_{i}^{k}\) and \(R_{i,k}\) we consider the pilot-controller example of Section 2, where both the pilot and the controller maintain SA about the location of the pilot’s aircraft. In a MAS setting this means there are three agents: the pilot (agent 1), the controller (agent 2) and the pilot’s aircraft (agent 3). Each of these agents has a state vector, i.e. \(x_{t,pilot}\), \(x_{t,controller}\) and \(x_{t,aircraft}\). Let’s assume that the aircraft location elements in these state vectors are: s for the pilot, r for the controller and q for the aircraft. Then the pair (s, q) is in the set \(S_{pilot}^{aircraft}\) and the pair (r, q) is in the set \(S_{controller}^{aircraft}\). However, normally the pairs (s, r) and (r, s) are not in the sets \(S_{pilot}^{controller}\) and \(S_{controller}^{pilot}\), respectively, even if (s, r) is in the set \(R_{pilot,controller}\). Only in exceptional situations may the pair (r, s) be in the set \(S_{controller}^{pilot}\); for example if the controller has reason to believe that the pilot has an erroneous SA regarding the location of his or her aircraft.
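The asymmetry of MA-SA relations in this example can be sketched in a few lines of Python; the dictionary keyed by agent pairs and all element names are illustrative assumptions.

```python
# Sketch of the MA-SA relations in the pilot-controller-aircraft example.
# S[(i, k)] holds the set of (s, r) pairs of agent A_i's relation S_i^k.
# All names are illustrative, not from the paper.

S = {
    # pilot's SA element s_loc tracks the aircraft's location element q_loc
    ("pilot", "aircraft"): {("s_loc", "q_loc")},
    # controller's SA element r_loc tracks the same aircraft element
    ("controller", "aircraft"): {("r_loc", "q_loc")},
    # normally no direct pilot<->controller relation about the location,
    # even though (s, r) is in the design requirement set R_{pilot,controller}
}

def maintains_sa(S: dict, i: str, k: str) -> bool:
    """Agent A_i maintains SA about A_k iff S_i^k is non-empty."""
    return bool(S.get((i, k)))

print(maintains_sa(S, "pilot", "aircraft"))    # True
print(maintains_sa(S, "controller", "pilot"))  # False: relations are asymmetric
```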

Remark 1

Because \(S_{i}^{k}\) explicitly belongs to \(A_i\), a logical assumption is that the MA-SA relation \(S_{i}^{k}\) is known to agent \(A_i\), i.e., \(S_{i}^{k}\) is represented by one or more elements of state \(x_{t,i}\) of agent \(A_i\). In line with this, the MA-SA relation \(S_{i}^{k}\) may vary over time. Nevertheless, for notational simplicity we assume that \(S_{i}^{k}\) is time-invariant.

3.2 Special cases

For proactive agents and for non-agent \(A_0\), special cases apply. In particular, proactive agents may have a self-awareness relation \(S_{i}^{i}\). If the pair (s, r) is in the set \(S_{i}^{i}\), then state element \(x_{t,i}(s)\) is agent \(A_i\)’s self-awareness about its own state element \(x_{t,i}(r)\). This also means that the opposite pair (r, s) is not in the set \(S_{i}^{i}\).

For non-agent \(A_0\), the specialty rather is that none of its state elements maintains SA. Hence, the set \(S_{0}^{k}\) is empty. Of course, in general, the opposite MA-SA relation \(S_{k}^{0}\) will not be an empty set, i.e. agent \(A_k\) may maintain SA about one or more entities in \(A_0\). This means that typically there will be an asymmetry between \(S_{k}^{0}\) and \(S_{0}^{k}\), i.e. (s, r) may be in \(S_{k}^{0}\) while (r, s) is not in \(S_{0}^{k}\). Such asymmetry may also apply to any pair of agents, i.e. in general \(S_{k}^{i}\neq S_{i}^{k}\), \(k \neq i\).

In the following special cases the situation is considered in which MA-SA relation \(S_{i}^{k}\) contains partly overlapping pairs. Two kinds of overlap are possible: 1) (s, r) and (s, r′), r ≠ r′, are both in \(S_{i}^{k}\); and 2) (s, r) and (s′, r), s ≠ s′, are both in \(S_{i}^{k}\). In case 2), both \(x_{t,i}(s)\) and \(x_{t,i}(s')\) form SA of \(x_{t,k}(r)\). If agent \(A_i\) assures that the two are always the same, then one of the two can be deleted. However, if agent \(A_i\) would fail to maintain equality, then this could lead to ambiguity.

In case 1), \(x_{t,i}(s)\) is the SA of both \(x_{t,k}(r)\) and \(x_{t,k}(r')\). Because agent \(A_k\) is in control of \(x_{t,k}(r)\) and \(x_{t,k}(r')\), it may happen that these two differ, i.e. \(x_{t,k}(r) \neq x_{t,k}(r')\). In such a case there may be ambiguity for SA \(x_{t,i}(s)\) of agent \(A_i\). In order to avoid the above types of ambiguities the proposed MA-SA framework does not allow any partial overlap of pairs in MA-SA relations: if \((s,r)\in S_{i}^{k}\), then neither (s, r′), r′ ≠ r, nor (s′, r), s′ ≠ s, are in \(S_{i}^{k}\).
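The no-partial-overlap rule amounts to requiring that no state element index occurs in two different pairs of the same relation, on either side. A minimal validation sketch (the function name is illustrative):

```python
# Sketch of the no-partial-overlap rule of Section 3.2: within one relation
# S_i^k, no index s may appear with two different r's (case 1), and no index
# r may appear with two different s's (case 2).

def has_partial_overlap(relation: set) -> bool:
    """True iff some s or some r occurs in more than one (s, r) pair."""
    left = [s for (s, _) in relation]
    right = [r for (_, r) in relation]
    return len(left) != len(set(left)) or len(right) != len(set(right))

print(has_partial_overlap({(1, 7), (2, 8)}))  # False: disjoint pairs, allowed
print(has_partial_overlap({(1, 7), (1, 8)}))  # True: case 1), same s twice
print(has_partial_overlap({(1, 7), (2, 7)}))  # True: case 2), same r twice
```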

Remark 2

An open question is whether it would make sense to relax the above assumption, for example to allow that agent \(A_i\) maintains SA about some composite state element of another agent, or to allow that agent \(A_i\) maintains a composite SA about one state element of another agent.

For the pilot-controller example in Section 3.1, the MA-SA relation framework allows that the controller maintains SA about what the pilot maintains as SA about the location of its aircraft. Such type of reasoning is often considered in Theory of Mind [15], also for a depth of more than two levels. The proposed framework also supports any depth of reasoning. For this, MA-SA relations have to be concatenated. For example, if MA-SA relation \(S_{i}^{k}\) of agent \(A_i\) has an element (s, r) and MA-SA relation \(S_{k}^{j}\) of agent \(A_k\) has an element (r, q), then concatenation of \(S_{i}^{k}\) and \(S_{k}^{j}\) yields: \(x_{t,i}(s)\) is the SA of agent \(A_i\) about \(x_{t,k}(r)\), which is the SA of agent \(A_k\) about state element \(x_{t,j}(q)\) of \(A_j\).
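Concatenation of MA-SA relations is ordinary relational composition, which can be sketched as a set comprehension; the relation contents below are illustrative.

```python
# Sketch of concatenating MA-SA relations for Theory-of-Mind style depth:
# if (s, r) is in S_i^k and (r, q) is in S_k^j, then x_{t,i}(s) is A_i's SA
# about x_{t,k}(r), which in turn is A_k's SA about x_{t,j}(q).

def concatenate(S_ik: set, S_kj: set) -> set:
    """Relational composition: pairs (s, q) linked through a shared index r."""
    return {(s, q) for (s, r) in S_ik for (r2, q) in S_kj if r == r2}

# Illustrative names: controller's belief about the pilot's location SA,
# composed with the pilot's SA relation about the aircraft's true location.
S_controller_pilot = {("ctl_belief_of_pilot_loc", "pilot_loc_sa")}
S_pilot_aircraft = {("pilot_loc_sa", "true_loc")}

print(concatenate(S_controller_pilot, S_pilot_aircraft))
# {('ctl_belief_of_pilot_loc', 'true_loc')}
```

Deeper nesting (A about B about C about D, and so on) is obtained by composing further relations in the same way.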

3.3 MA-SA in a system of N agents

Having defined i) the MAS, ii) the state of each agent and of non-agent entities, and iii) the MA-SA relations between state elements, we are prepared to identify which elements of state \(x_{t,i}\) are SA elements and which are not. We denote by \(\sigma_{t,i}^{k}\) the SA of agent \(A_i\) at moment t about the state of agent \(A_k\). This defines \(\sigma_{t,i}^{k}\) as the set of state elements \(x_{t,i}(s)\) of agent \(A_i\) for which there is a MA-SA relation with state elements of agent \(A_k\), i.e.:

$$ \sigma_{t,i}^{k} \overset{\triangle}{=}\{x_{t,i}(s), \exists r \text{ s.t. } (s,r)\in S_{i}^{k} \} $$
(2)

If set \(S_{i}^{k}\) is non-empty, then \(\sigma_{t,i}^{k}\) is non-empty, and we say “Agent \(A_i\) maintains SA about \(A_k\)”.

Similarly, by setting k = i, (2) defines the self-awareness \(\sigma_{t,i}^{i}\) of agent \(A_i\) at moment t. In addition to the self-awareness \(\sigma_{t,i}^{i}\) and the MA-SA components \(\sigma_{t,i}^{k}\), \(k \neq i\), state \(x_{t,i}\) may contain state elements that are not related to any other state element through \(\left\{S_{i}^{k}, k=1,\ldots,N\right\}\). These elements of \(x_{t,i}\) define the base state \(\xi_{t,i}\) of \(A_i\), i.e.

$$ \xi_{t,i} \overset{\triangle}{=}\{x_{t,i}(s) \text{ s.t. } (s,r)\notin S_{i}^{k} \,\, \forall (k,r)\} $$
(3)

As a consequence of (2)-(3), it follows that the state \(x_{t,i}\) of \(A_i\) consists of base state \(\xi_{t,i}\), self-awareness \(\sigma_{t,i}^{i}\), and SA \(\sigma_{t,i}^{k}\), \(k\neq i\), about all other agents, i.e.

$$ x_{t,i} = \xi_{t,i} \cup \sigma_{t,i}^{i} \cup \bigcup\limits_{k\ne i} \sigma_{t,i}^{k} $$
(4)
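The decomposition (2)-(4) can be sketched computationally: given an agent's relations, its state element indices partition into SA components and the base state. The representation below (indices as integers, relations as a dictionary) is an illustrative assumption.

```python
# Sketch of the state decomposition (Eqs. 2-4): given agent A_i's relations
# {S_i^k}, split its state element indices into the SA components sigma_i^k
# and the base state xi_i (elements in no MA-SA relation).

def decompose_state(state_indices: set, relations: dict):
    """relations maps k -> set of (s, r) pairs of S_i^k (k == i allowed)."""
    sigma = {k: {s for (s, _) in pairs} for k, pairs in relations.items()}
    sa_indices = set().union(*sigma.values()) if sigma else set()
    base = state_indices - sa_indices  # Eq. (3)
    return sigma, base                 # Eq. (4): their union recovers the state

indices = {0, 1, 2, 3}
relations = {"self": {(0, 3)}, "other": {(1, 5)}}
sigma, base = decompose_state(indices, relations)
print(sigma)  # {'self': {0}, 'other': {1}}
print(base)   # {2, 3}: elements that maintain no SA about anything
```

Note that element 3 sits in the base state even though the agent's own element 0 maintains self-awareness about it; being the target of an MA-SA relation does not make an element an SA element.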

Remark 3

If, for some s, state element \(x_{t,i}(s)\) of agent \(A_i\) is part of the base state \(\xi_{t,i}\), then this does not exclude the possibility that another agent \(A_k\), \(k \neq i\), maintains SA about this base state element \(x_{t,i}(s)\) of agent \(A_i\).

Remark 4

We can use the MA-SA relations to collect those state elements of \(A_i\) for which SA is maintained by any of the other agents; this is the following set: \(\left\{x_{t,i}(r), \exists s \text{ s.t. } (s,r)\in S_{k}^{i} \text{ for some } k \in [1,N]\right\}\).

4 MA-SA updating in a MAS

The aim of this section is to express agent \(A_i\)’s situation assessment process of its environment at the three levels of Endsley [3]. First, Section 4.1 addresses the updating of an agent’s SA about its environment, i.e. \(\sigma_{t,i}^{k}\), \(k\ne i\), and how this relates to Endsley’s Levels 1 and 2. Next, Section 4.2 addresses the updating of the agent’s other state components, i.e. \(\xi_{t,i}\) and \(\sigma_{t,i}^{i}\), and how this relates to Endsley’s Level 3.

4.1 Updating of an agent’s SA about its environment

Each agent \(A_i\) in a MAS determines its own moment in time at which an update is made of its SA \(\sigma_{t,i}^{k}\) about \(A_k\), \(k \neq i\). Just before such a moment t, the SA of agent \(A_i\) about agent \(A_k\) is \(\sigma_{t-,i}^{k}\).Footnote 2 As a consequence of the update at moment t, the SA of agent \(A_i\) about agent \(A_k\) becomes \(\sigma_{t,i}^{k}\). Within a MAS, such an update of a subset of the state of agent \(A_i\) is some function \(f_{i,k}\) of the states of agents \(A_i\) and \(A_k\) just before the update. This can be expressed through the following equation: \(\sigma_{t,i}^{k} = f_{i,k}(x_{t-,i}, x_{t-,k})\). Obviously the specific form of the function \(f_{i,k}\) depends on the MAS model for agent \(A_i\) and its interactions with agent \(A_k\). Also the time moment t will be determined by the MAS model for agent \(A_i\) on the basis of its own state and the possible activity by another agent.

In practice, typically there are all kinds of uncertainties involved when applying such a function \(f_{i,k}\). In order to capture such uncertainties we introduce a random term \(\varepsilon_{t,i,k}\) in the latter equation, which yields an overall MA-SA update equation:

$$ \sigma_{t,i}^{k} =f_{i,k} (x_{t-,i} ,x_{t-,k} ,\varepsilon_{t,i,k} ) $$
(5)

where \(\varepsilon_{t,i,k}\) represents possible errors or uncertainty that may play a role in updating the SA of agent \(A_i\) about agent \(A_k\).

In order to make MA-SA update (5) more specific, next we characterize it through three more specific update equations, each of which can be linked to one of the first two levels of [3]. These three equations are for:

  a. Observation, by agent \(A_i\), about the state of agent \(A_k\);

  b. Messaging, received by agent \(A_i\) from agent \(A_k\); and

  c. Interpretation, by agent \(A_i\), of an Observation or a Message.

An update of \(\sigma_{t,i}^{k}\) based on an observation of state elements of \(A_k\) is represented by a combination of the following Observation and Interpretation equations:

$$ y_{t,i}^{k} =f_{i,k}^{observation} \left(x_{t-,i}, x_{t-,k}, \varepsilon_{t,i,k}^{observation}\right) $$
(6)
$$ \sigma_{t,i}^{k} =f_{i,k}^{interpretation} \left(x_{t-,i} ,y_{t,i}^{k}, \varepsilon_{t,i,k}^{interpretation} \right) $$
(7)

where \(f_{i,k}^{observation}(.)\) is an observation function, \(f_{i,k}^{interpretation}(.)\) is an interpretation function, and \(\varepsilon_{t,i,k}^{observation}\) and \(\varepsilon_{t,i,k}^{interpretation}\) represent potential observation and interpretation errors respectively.

Observation (6) provides a measurement of \(x_{t-,k}\) from the perspective of \(A_i\)’s state \(x_{t-,i}\). This coincides quite well with Endsley’s Level 1 of perception by an individual of the state, attributes, and dynamics of task-related elements in the surrounding environment. Subsequently, Interpretation (7) uses this measurement and the state of agent \(A_i\) to update the SA of agent \(A_i\) about agent \(A_k\). The latter coincides quite well with Endsley’s Level 2 of interpretation and understanding of a new observation in relation to the individual’s task and goals.

In order to verify that the Observation and Interpretation combination yields an equation of type (5), we substitute (6) into (7), which yields:

$$ \sigma_{t,i}^{k} = f_{i,k}^{interpretation} \left(x_{t-,i}, f_{i,k}^{observation}\left(x_{t-,i}, x_{t-,k}, \varepsilon_{t,i,k}^{observation}\right), \varepsilon_{t,i,k}^{interpretation}\right) $$

The latter implies that \(\sigma_{t,i}^{k}\) can be written as a function of \(x_{t-,i}\), \(x_{t-,k}\) and a random error, as in (5).
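A small numerical sketch of the Observation/Interpretation cycle (6)-(7) may help: agent \(A_i\) repeatedly takes a noisy measurement of one state element of \(A_k\) and blends it with its prior SA. The Gaussian noise model and the fixed blending weight are illustrative modelling choices, not prescribed by the framework.

```python
# Illustrative instance of Eqs. (6)-(7): noisy observation followed by an
# interpretation that blends the measurement with the prior SA.
import random

random.seed(0)  # deterministic run for the example

def observe(x_k: float, noise_std: float) -> float:
    """Eq. (6): measurement y = x_k plus an observation error."""
    return x_k + random.gauss(0.0, noise_std)

def interpret(prior_sa: float, y: float, weight: float = 0.5) -> float:
    """Eq. (7): updated SA as a weighted blend of prior SA and measurement."""
    return (1.0 - weight) * prior_sa + weight * y

true_altitude = 36000.0   # A_k's actual state element (illustrative)
sa = 35000.0              # A_i's initially erroneous SA about it
for _ in range(5):
    sa = interpret(sa, observe(true_altitude, noise_std=10.0))

print(abs(sa - true_altitude) < 200.0)  # True: SA converges toward the truth
```

With zero noise this reduces to the deterministic form \(\sigma_{t,i}^{k} = f_{i,k}(x_{t-,i}, x_{t-,k})\) discussed before (5); the Messaging variant (8)-(9) has the same shape with the observation replaced by a received message.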

For a received message from agent \(A_k\), a set of equations applies that is similar to (6, 7), i.e.

$$ z_{t,i}^{k} =f_{i,k}^{message} \left(x_{t-,i} ,x_{t-,k} ,\varepsilon_{t,i,k}^{message}\right) $$
(8)
$$ \sigma_{t,i}^{k} =f_{i,k}^{interpretation} \left(x_{t-,i}, z_{t,i}^{k}, \varepsilon_{t,i,k}^{interpretation}\right) $$
(9)

with messaging function \(f_{i,k}^{message}(.)\), interpretation function \(f_{i,k}^{interpretation}(.)\), and \(\varepsilon_{t,i,k}^{message}\) and \(\varepsilon_{t,i,k}^{interpretation}\) representing potential messaging and interpretation errors respectively. Similar to Observation (6), Messaging (8) provides a kind of measurement of \(A_k\)’s state \(x_{t-,k}\) from the perspective of \(A_i\)’s state \(x_{t-,i}\), and therefore also fits quite well at Endsley’s Level 1. Subsequently, Interpretation (9) uses this measurement and the state of agent \(A_i\) to update the SA of agent \(A_i\) about agent \(A_k\).

4.2 Projection equation at Endsley’s level 3

An interpretation update according to (7) or (9) typically triggers a projection type of update of agent \(A_i\)’s base state \(\xi_{t,i}\) and self-awareness \(\sigma_{t,i}^{i}\). The resulting outcome of such a projection update is \(\xi_{t+,i}\) and \(\sigma_{t+,i}^{i}\),Footnote 3 respectively, which is captured through the following projection equation:

$$ \left(\xi_{t+,i} ,\sigma_{t+,i}^{i} \right)=f_{i}^{projection} \left(x_{t,i} ,\varepsilon_{t,i}^{projection}\right) $$
(10)

where \(f_{i}^{projection} (.)\) is a projection function and \(\varepsilon _{t,i}^{projection}\) represents possible error in the projection process.

Projection (10) incorporates two coupled updates:

  • Update of agent \(A_i\)’s base state \(\xi_{t,i}\) through reasoning at Endsley’s Level 3;

  • Update of agent \(A_i\)’s self-awareness \(\sigma_{t,i}^{i}\).

The reasoning at Endsley’s [3] Level 3 addresses the prediction, i.e. significantly beyond the current time t, of future states of the other agents and non-agent entities in the environment of agent \(A_i\), as well as making novel plans for itself. All these novel predictions and plans form elements in the base state \(\xi_{t+,i}\). Simultaneously with the updating of these predictions and plans, agent \(A_i\)’s self-awareness \(\sigma_{t+,i}^{i}\) is also updated, for example to maintain self-awareness of agent \(A_i\)’s task load that is involved with the updated predictions and plans.

Typically, as a result of the update of agent \(A_i\)’s base state to \(\xi_{t+,i}\), agent \(A_i\) will send one or more messages to one or more other agents. Subsequently this may trigger SA updates by these other agents.

5 MA-SA differences and MA-SA inconsistencies

We say that agent \(A_i\) has correct SA about \(A_k\) iff the following equation holds true:

$$ x_{t,i}(s)=x_{t,k}(r),\forall (s,r)\in S_{i}^{k} $$
(11)

By analogy with Endsley and Jones [6], we say that shared MA-SA between agents \(A_k\) and \(A_i\), \(i \neq k\), is the degree to which the pairs of state elements that correspond to the sets \(S_{i}^{k}\) and \(S_{k}^{i}\) are equal. Hence, agents \(A_k\) and \(A_i\) are said to have fully shared SA iff both (11) and the following hold true:

$$ x_{t,k}(s)=x_{t,i}(r), \forall(s,r)\in S_{k}^{i} $$
(12)

If similar conditions are satisfied for all other pairs of agents in the MAS, then all agents in the MAS are said to have fully shared MA-SA. We say that agents \(A_i\) and \(A_k\) have fully shared and correct MA-SA, iff in addition to (11, 12) the following equations are satisfied:

$$ x_{t,k}(s)=x_{t,0}(r), \forall(s,r)\in S_{k}^{0} $$
(13.a)
$$ x_{t,i}(s)=x_{t,0}(r), \forall(s,r)\in S_{i}^{0} $$
(13.b)

In case there is a pair of agents for which (11, 12) do not hold true, then we say there is an MA-SA difference among agents in the MAS. For example, if there is an \((s,r)\in S_{i}^{k}\) for which \(x_{t,i}(s) \neq x_{t,k}(r)\), then this means that the SA \(x_{t,i}(s)\) of agent \(A_i\) differs from the corresponding state element \(x_{t,k}(r)\) of agent \(A_k\).

As has been well explained in [5] there are various types of differences. From a safety perspective, an important distinction is whether a difference is known or unknown. We illustrate this distinction for the pilot-controller example of Section 3.1. Assume the pilot’s awareness about the position ν of its aircraft is according to a belief measureFootnote 4 with support on the interval \([\bar{\nu}_{1} -\epsilon_{1}, \bar{\nu}_{1} +\epsilon_{1}]\). Similarly assume that the controller’s awareness about the position ν of this aircraft is according to a belief measure with support on the interval \([\bar{\nu}_{2} -\epsilon_{2}, \bar{\nu}_{2} +\epsilon_{2}]\). Unless \(\bar{\nu}_{1} =\bar{\nu}_{2} =\nu\) and \(\epsilon_{1}=\epsilon_{2}=0\), there are differences between each of these three SAs. However, the difference in the SA of the pilot about its aircraft’s position is often known; it is an unknown SA difference iff \(\left|{\bar{\nu}_{1} -\nu}\right|>\epsilon_{1}\). Similarly, the difference in the SA of the controller about this aircraft position is unknown iff \(\left|{\bar{\nu}_{2} -\nu}\right|>\epsilon_{2}\).

In order to capture this idea of unknown SA difference we introduce the concept of MA-SA consistency. We say there is MA-SA consistency of agent A i regarding A k iff

$$ Support\{x_{t,i}(s)\}\supseteq Support\{x_{t,k}(r)\}, \forall(s,r)\in S_{i}^{k} $$
(14)

where \(Support\{x_{t,i}(s)\}\) refers to the mathematical support (the set of values having a non-zero belief measure) of state element \(x_{t,i}(s)\). Application of (14) to the pilot/controller example above implies that the MA-SA of the pilot about the aircraft position is inconsistent if \(\left|{\bar{\nu}_{1}-\nu}\right|>\epsilon_{1}\), and the MA-SA of the controller about the aircraft position is inconsistent if \(\left|{\bar{\nu}_{2} -\nu}\right|>\epsilon_{2}\).

If there is MA-SA consistency of agent \(A_i\) regarding agent \(A_k\) and there also is MA-SA consistency of agent \(A_k\) regarding agent \(A_i\), then we say there is MA-SA consistency between agents \(A_k\) and \(A_i\). Finally, if (14) holds true for each combination of (i, k) with \(i \neq 0\) and \(k \neq i\), i.e. including k = 0, then we say there is full MA-SA consistency in the MAS.
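For interval-valued belief supports, as in the pilot/controller position example, the consistency condition (14) reduces to interval containment. A minimal sketch (all numerical values are illustrative):

```python
# Sketch of the MA-SA consistency check (Eq. 14) for interval-valued belief
# supports: A_i's SA about x_{t,k}(r) is consistent iff its support contains
# the support of x_{t,k}(r).

def consistent(sa_support: tuple, true_support: tuple) -> bool:
    """Interval containment: Support{x_i(s)} must include Support{x_k(r)}."""
    return sa_support[0] <= true_support[0] and true_support[1] <= sa_support[1]

nu = 120.0                  # true aircraft position: a point support [nu, nu]
pilot = (118.0, 123.0)      # nu_bar_1 = 120.5, eps_1 = 2.5: |120.5-120| <= 2.5
controller = (122.0, 126.0) # nu_bar_2 = 124.0, eps_2 = 2.0: |124-120| > 2

print(consistent(pilot, (nu, nu)))       # True: a known SA difference only
print(consistent(controller, (nu, nu)))  # False: an unknown SA difference,
                                         # i.e. an MA-SA inconsistency
```

Both SAs here differ from the true position; only the controller's difference is an inconsistency, because the true value lies outside the support of its belief.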

6 Case Study

To illustrate the MA-SA modelling framework, we perform a retrospective analysis of the Überlingen mid-air collision, one of the most serious accidents in aviation history. It occurred on 1 July 2002 between a Tupolev TU154M passenger jet and a Boeing 757-200 cargo jet over the towns of Überlingen and Owingen in southern Germany. In the official accident investigation report [16] the following immediate causes of the accident were identified:

  • The imminent separation infringement was not noticed by the air traffic controller on duty in time. The instruction for the TU154M to descend was given at a time when the prescribed separation to the B757-200 could not be ensured anymore.

  • The TU154M crew followed the air traffic controller’s instruction to descend and continued to do so even after the onboard traffic collision avoidance system (TCAS) advised them to climb.

Among the causes identified by the official investigation are the following two: i) the integration of new TCAS systems into the aviation system was insufficient; and ii) during the night, workstations were not continuously staffed by controllers. For a more detailed description of the accident and its investigation we refer to [16].

In the following sections we shall model this case using the newly developed MA-SA framework. In Section 6.1 we identify the relevant agents and their relevant states. Relevant MA-SA relations are described in Section 6.2. In Section 6.3 MA-SA differences and their evolution and propagation through MA-SA update processes are described. Section 6.4 summarizes the novel aspects of the framework in this case study.

6.1 Relevant agents and their state elements

We start the modelling of the case study with the identification of agents and the formalization of their states. The agents and non-agent entities that played a relevant role in the Überlingen accident are identified in Table 1 below.

Table 1 Agents in the Überlingen accident considered in the case study

To formalize the states of agents in air traffic management, Stroeve et al. [13] distinguish the following state components:

  • the identity of an agent (e.g., the callsign of an aircraft, the organisational role of a human such as that of a supervisor controller);

  • a discrete state (mode) of an agent (e.g., an alert mode of a technical system, a cognitive mode of a pilot);

  • a continuous state of an agent (e.g., the speed of an aircraft, the workload of an ATCo);

  • an intent of an agent – a plan to be followed by an agent, which is a time-indexed sequence of discrete and continuous states to be executed in the future (e.g., a time-indexed taxiing route).

For our case study, the values for these state elements of the agents were identified based on the investigation report [16], also taking into account manuals and regulations prescribing the rules of execution of operations in ATM. In accordance with the proposed theoretical framework, the states of the agents comprise base state, self-awareness and SA components. The state vectors of the aircraft in our study consist of base state components only. The aircraft are considered to be reactive agents; they neither have SA about the other agents, nor have self-awareness. The identified state elements for agents TU154, TU154 crew, and ATCo are provided in Table 2. The state elements of agents B757 and B757 crew are defined in the same way. The identified state elements for the other relevant agents are provided in Table 3; their sets of state elements are rather limited.

Table 2 State elements of agents TU154, TU154 crew, and ATCo, denoted by their indexes
Table 3 State elements of the other relevant agents

6.2 MA-SA relations

Table 4 shows the MA-SA relations identified between the agents in the case study. Most of the relations described in the table concern TU154 and TU154 crew agents. The relations for B757 and B757 crew agents are defined in a similar way. Note that the state properties referred to in the MA-SA relations are specified using the same state language and the same ontology. Therefore, as indicated in Section 3, the MA-SA relation elements in Table 4 are identified by the corresponding state properties.

Table 4 MA-SA relations of ATCo and TU154 crew regarding TU154, ATCo and B757 crew; and of ATCo, TU154 crew and B757 crew regarding TU154

In addition, the following MA-SA relations were identified involving other agents:

\(S_{TU154\,crew}^{TCAS-TU}\): TCAS-TU alert, TCAS terms of use

\(S_{ATCo}^{STCA}\): STCA's mode, STCA's alert

\(S_{ATC-K}^{ATCo}\): Conflict B757-TU154
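These relations can be recorded compactly as sets of state properties indexed by (holder, target) pairs; the encoding below is our own sketch, with the first index denoting the agent that maintains the SA:

```python
# Our own encoding of the additional MA-SA relations: each relation of a
# holder agent about a target agent is identified by the set of state
# properties shared by its elements.
ma_sa_relations = {
    ("TU154 crew", "TCAS-TU"): {"TCAS-TU alert", "TCAS terms of use"},
    ("ATCo", "STCA"): {"STCA's mode", "STCA's alert"},
    ("ATC-K", "ATCo"): {"Conflict B757-TU154"},
}

def sa_properties(holder, target):
    """State properties about which the holder maintains SA of the target."""
    return ma_sa_relations.get((holder, target), set())
```

Since the lookup is directional, the asymmetry of these relations is explicit: the property set of a holder about a target may be non-empty while the reverse set is empty.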

Furthermore, the following base state components were identified for agents TU154 crew and B757 crew, i.e. these state components are not components of an SA about state components of another agent:

\(\xi _{t,TU154\,crew}\): intent of TU154 crew

\(\xi _{t,B757\,crew}\): intent of B757 crew

The development of the conflict that led to the accident can be explained in terms of the development and propagation of MA-SA differences through MA-SA update processes. In the following, these processes are indicated by (O) for observation, (M) for messaging, (I) for interpretation and (P) for projection. In order to keep the elaboration of the MA-SA updating limited, we focus on the key SA differences and SA updates in the development of the conflict during the last five minutes before the mid-air collision.
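The (O)/(M)/(I)/(P) notation can be mirrored in code by tagging every SA update with the process that produced it; the function below is a minimal sketch under our own data layout:

```python
# Labels for the four MA-SA update processes.
OBSERVATION, MESSAGING, INTERPRETATION, PROJECTION = "O", "M", "I", "P"

def apply_update(sa, process, about_agent, element, value):
    """Return a copy of an agent's SA with one element set to value,
    recording which update process (O, M, I or P) produced it."""
    new_sa = {agent: dict(elems) for agent, elems in sa.items()}
    new_sa.setdefault(about_agent, {})[element] = (value, process)
    return new_sa

# Hypothetical example: the ATCo's SA after observing TU154's altitude,
# then messaging an instruction that updates the SA about the crew's intent.
sa = {"TU154": {"altitude": ("FL360", OBSERVATION)}}
sa = apply_update(sa, MESSAGING, "TU154 crew", "intent", "descend")
```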

6.3 Development and propagation of MA-SA differences

Figure 1 illustrates the development and propagation of the MA-SA differences for the case study, which we identify through the analysis below.

Fig. 1

Development and propagation of MA-SA differences in the case study, based on a sequence of events and MA-SA update processes (horizontal axis at the top of the figure). Initially, there are MA-SA inconsistencies (unknown MA-SA differences) because neither crew was aware of the other aircraft and the ATCo was not aware of the conflict. Then the ATCo noticed the conflict and issued instructions, as a result of which the MA-SA inconsistencies were step by step either resolved or turned into known differences. However, before these known differences were resolved, TCAS alerts created several new inconsistencies. These inconsistencies were not recognised in time, and the aircraft did not miss each other by chance either

In our MA-SA analysis we take time point 21:30:11 as the starting point. At this time point, the TU154 and the B757 aircraft were at the same flight level and approached each other at right angles; the distance between them was 64 NM. However, the ATCo did not notice this, as a result of his erroneous observation (O) and interpretation (I) process regarding one or both aircraft. Furthermore, at that time point, neither of the crews knew about the existence of the other aircraft. Thus, there was an SA inconsistency of the TU154 crew regarding the B757 and an SA inconsistency of the B757 crew regarding the TU154, w.r.t. all MA-SA relations connecting the two aircraft and their crews.

Both crews detected (O) the other aircraft later, but before the infringement had occurred; however, because of the high altitudes and the darkness, they could not identify (I) its altitude, flight direction and air speed accurately. Thus, the differences in the corresponding MA-SA relations were not eliminated, but the SA inconsistencies were resolved: the unknown differences had become known.

The ATCo detected (O, I) the developing conflict when the horizontal separation between the aircraft was already below 5 NM (21:34:49). The ATCo then advised (M) the TU154 crew to descend, which was acknowledged by the crew (I, M). Thus, both the intent state of the TU154 crew and the SA of the ATCo about the intent state of the TU154 crew were updated in the same way, i.e., there was no MA-SA difference in

\(S_{ATCo}^{TU154\,crew}\): Intent of TU154 crew.

However, the B757 crew was not aware of the TU154 crew's intent, as it was not communicated (M) to them, neither by the ATCo nor by the TU154 crew. Thus, there was an SA inconsistency of the B757 crew regarding the TU154 crew w.r.t. state element 'intent of TU154 crew' and an SA inconsistency of the B757 crew regarding the ATCo w.r.t. state element 'intent of TU154 crew'.

Furthermore, both crews were still not aware of the developing conflict (infringement), as it was not clearly communicated (M) by the ATCo, and they themselves were not able to observe (O) it.

The crews of the aircraft became aware of the developing conflict only when their corresponding TCASs issued resolution advisories at 21:34:56 (M, I). At that time, the crews’ SAs were updated through messaging and interpretation.

The TCAS of the TU154 provided (M) to its crew the advisory to climb, whereas the TCAS of the B757 provided (M) to its crew the advisory to descend. However, the crews were not aware of each other's TCAS advisories.

The flight operation manual prescribes that pilots must comply with TCAS instructions. However, the TU154 crew was not aware of this (I), i.e., there was an SA inconsistency of the TU154 crew regarding TCAS-TU w.r.t. state element ‘TCAS terms of use’.

Because of this, the TU154 crew continued following the ATCo's instructions, which contradicted the TCAS-TU resolution advisory. Thus, owing to this error in the projection process (P), there was no change in the TU154 crew's intent.

In contrast, the B757 crew decided (P) to follow the TCAS advisory and started to descend (21:35:19); however, this information reached (M, I) neither the ATCo nor the TU154 crew, due to an error of omission in the messaging or interpretation updating process. Thus, there existed an SA inconsistency of the ATCo regarding the B757 and an SA inconsistency of the TU154 crew regarding the B757; both inconsistencies were w.r.t. state elements 'flight mode of B757' and 'altitude of B757'.

The intent of the B757 crew was also not known to the ATCo (M) and the TU154 crew (M), meaning that there were SA inconsistencies of the ATCo regarding the B757 crew and of the TU154 crew regarding the B757 crew; both inconsistencies were w.r.t. state element 'intent of B757 crew'.

Thus, the ATCo was not aware of the developing conflict.

At 21:35:00, the STCA system of the ATCo, which functioned in aural mode, issued a conflict alert that was not perceived by the ATCo (O); i.e., there was an SA inconsistency of the ATCo regarding the STCA w.r.t. state element 'STCA's alert'.

The neighbouring air traffic control center ATC-K was aware of the conflict, but was not able to warn the ATCo (M) because of the malfunctioning phone system; i.e., there was an SA inconsistency of ATC-K regarding the ATCo w.r.t. state element 'conflict B757-TU154'.

Therefore, the ATCo was not aware of the conflict until the accident happened at 21:35:32.
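The sequence of events analysed above can be laid out as a small timeline, where a flag marks whether the corresponding MA-SA update actually reached the receiving agent. The times and process tags are taken from the analysis; the encoding itself is our own:

```python
# Timeline of key MA-SA update attempts; the final flag records whether
# the update succeeded (True), failed (False), or does not apply (None).
timeline = [
    ("21:30:11", "O,I", "ATCo observation of the converging aircraft", False),
    ("21:34:49", "O,I", "ATCo detects the developing conflict", True),
    ("21:34:49", "M", "ATCo instructs the TU154 crew to descend", True),
    ("21:34:56", "M,I", "TCAS resolution advisories reach both crews", True),
    ("21:35:00", "O", "STCA aural alert perceived by the ATCo", False),
    ("21:35:19", "M", "B757 descent reported to ATCo and TU154 crew", False),
    ("21:35:32", None, "mid-air collision", None),
]

# Each failed update leaves an MA-SA inconsistency in place:
failed_updates = [event for event in timeline if event[3] is False]
```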

This example application shows that our newly developed MA-SA framework forms an effective way to structure a retrospective analysis of systemic behaviour behind an accident in a complex sociotechnical system.

6.4 Novel aspects of the framework in the case study

In this section we illustrate, using the case study, the novel aspects of our proposed framework over the framework of Endsley and Jones [6].

Novel aspect 1: The SA definition provided by [6] implicitly considers human agents only. The MA-SA framework developed in this paper also includes non-human agents.

In the case study the following non-human agents were considered: aircraft TU154 and aircraft B757, the TCASs of these aircraft, and the STCA system.

Novel aspect 2: MA-SA relations between two agents may be asymmetric, i.e. agent A may maintain SA about certain state elements of agent B, while agent B maintains SA about other state elements of agent A.

In the case study all MA-SA relations of type \(S_{i}^{j}\), where i is a human agent and j is a non-human agent, are asymmetric. Furthermore, relation \(S_{ATC-K}^{ATCo}\) is asymmetric too.

Novel aspect 3: Modelling to any depth the SA of one agent about the SA of another agent.

Consider an example of depth two. Let MA-SA relation \(S_{B757\,crew}^{ATCo}\) of agent B757 crew have an element (ATCo knows that B757 crew is aware of conflict B757-TU154, B757 crew is aware of conflict B757-TU154), and let MA-SA relation \(S_{ATCo}^{B757\,crew}\) of agent ATCo have an element (B757 crew is aware of conflict B757-TU154, there is a conflict B757-TU154). In this way, agent B757 crew can reason about the ATCo's knowledge of the B757 crew's awareness of the conflict.
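Nesting SA in this way can be mirrored by nested mappings; the sketch below uses our own dictionary layout to look up what the B757 crew believes the ATCo knows about the B757 crew:

```python
# Depth-two SA encoded as nested dictionaries (our own illustration):
# the outer key is the agent the SA is about, the inner key is the agent
# that agent's SA is about, and the leaf is the believed state element.
b757_crew_sa = {
    "ATCo": {
        "B757 crew": {"conflict B757-TU154": "aware"},
    }
}

def sa_at_depth_two(sa, mid_agent, end_agent, element):
    """What the holder believes mid_agent knows about end_agent."""
    return sa.get(mid_agent, {}).get(end_agent, {}).get(element)
```

Deeper nesting follows the same pattern, which is why the framework places no bound on the depth of MA-SA relations.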

Novel aspect 4: A systematic approach to differentiating between base state, self-awareness, SA about another agent, and SA about non-agent entities.

For example, consider agent TU154 crew. Its base state is defined by \(\xi _{t,TU154\,crew}=\){intent of TU154 crew at t}, its SA about human agent B757 crew is defined by \(\sigma _{t,TU154\,crew}^{B757\,crew}=\){intent of B757 crew at t}, and its SA about non-human agent TCAS-TU is defined by \(\sigma _{t,TU154\,crew}^{TCAS-TU}=\){TCAS-TU alert at t}.

Novel aspect 5: The MA-SA update processes at the three levels of Endsley [3] are made more specific for a MAS in terms of: Observation or Messaging at level 1, Interpretation at level 2 and Projection at level 3.

These MA-SA update processes are indicated throughout Section 6.3 by (O) for observation, (M) for messaging, (I) for interpretation and (P) for projection.

Novel aspect 6: A distinction is made between differences that are known to exist, and differences that are unknown to exist; the latter are referred to as MA-SA inconsistencies.

Figure 1 illustrates that severe safety problems typically start when an MA-SA inconsistency (i.e., an unknown MA-SA difference) sneaks in. Because such differences are unknown, they can persist and propagate unnoticed for some time in a MAS.
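The distinction can be sketched as a small classification rule; the function and its labels are our own illustration of novel aspect 6:

```python
# Classify an MA-SA difference: comparing an agent's SA value with the
# actual value of the target's state element, a difference the holder
# does not know about is an MA-SA inconsistency (our own encoding).
def classify_difference(sa_value, actual_value, known_to_holder):
    if sa_value == actual_value:
        return "no difference"
    return "known difference" if known_to_holder else "MA-SA inconsistency"
```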

7 Concluding remarks

In this paper, a formal framework has been developed for retrospective and prospective modelling and analysis of multiagent SA (MA-SA), which is based on multiagent SA relations in a system of multiple agents. In contrast to the existing agent-based models of SA (e.g. [8–10]), the proposed framework defines relations between agents at the most basic level and not as an addition on top of individualistic agent reasoning. Furthermore, SA is introduced as a set of stochastic processes, which cannot be represented by traditional epistemic and doxastic logic.

To develop the framework, first in Section 2 the Endsley and Jones model [6] for N humans in a sociotechnical system was captured in a formal setting. Based on this elaboration, a mathematically well-defined concept of SA relations between humans and shared SA was introduced. Next, in Section 3 this formalized concept of SA relations was extended to a multiagent system. This approach led to several extensions over Endsley and Jones [6]: i) the developed framework incorporates non-human agents; ii) MA-SA relations between two agents no longer need to be symmetric; iii) the MA-SA relation framework allows going to any depth to systematically capture the SA of one agent about the SA of another agent. In Section 4 it was shown that the MA-SA relation framework provides a systematic approach to differentiating between self-awareness, SA about another agent, and SA about non-agent entities. Complementary to this, a formal characterization of MA-SA update processes in a MAS was provided at the three levels of Endsley [3]: Observation or Messaging at level 1, Interpretation at level 2 and Projection at level 3. Subsequently, in Section 5, differences in MA-SA were defined relative to MA-SA relations between agents. Moreover, a distinction was introduced between known and unknown differences, and the latter were named MA-SA inconsistencies.

Finally, in Section 6, the newly developed formal framework was used to demonstrate a retrospective agent-based modelling of the Überlingen mid-air collision between two commercial transport jets. This example application demonstrates that the newly developed framework supports multiple views on SA considered in the literature. Although the model of Endsley and Jones [6] was used as a starting point, the support of the proposed framework is not limited to the product view on SA only. In particular, the proposed MA-SA update processes address the process view on SA [2]. Moreover, it was shown that the framework can be used to specify the interplay between the process and product views, as the ecological SA approach prescribes [4].

During the development of the novel MA-SA framework, a few assumptions were adopted, such as the one that the MA-SA relations are non-composite and time-invariant. In follow-up research it will be studied how our newly developed mathematical framework can be extended to less restrictive conditions. In particular, such an extension would be useful to enable agents with abilities to represent and reason about aggregated structures (such as teams, organizations) and joint actions and states of multiple agents. For example, in air traffic, a pilot will maintain some SA about ATC, without making an explicit distinction between the air traffic controller he or she has contact with and the broader sociotechnical ATC system that includes the air traffic controller. This means that the MA-SA relation of a pilot does not need to point to a specific element of the state of the air traffic controller, but rather to some imaginary state that may not be maintained by any individual agent on the ground. A similar issue applies to a controller, who may maintain SA of the composite of a crew and their aircraft systems rather than of each of them separately. In the future, the mathematical framework will be extended to capture these kinds of composite and imaginary MA-SA relations.

Although the proposed MA-SA framework has been developed in support of both prospective and retrospective analysis, the current paper demonstrated only the latter. Hence another important direction for follow-up research is to explore how the proposed MA-SA framework can be applied within Agent Oriented Software Engineering methodologies, e.g. [17, 18]. In doing so, we may benefit from the experience gained in applying [13]’s early MA-SA version to agent-based safety risk modelling and analysis of novel operations in air traffic management [19, 20]; the formal modelling language used in these applications is a high level Petri net formalism that supports compositional multi-agent modelling within the theoretical setting of stochastic hybrid automata [21, 22].