Scenario design
As in the first study, TGD influenced the design of the scenarios for the study on AR technology to foster collaboration and SA within and between emergency units. The scenarios were developed in a workshop similar to the one described above (see Section 3.1). During this half-day workshop, in which 6 members of the Dutch Police, the Netherlands Forensic Institute (NFI), and the fire brigade of the port of Rotterdam participated, 2 different scenarios were identified.
The following sections describe the 2 identified scenarios. In contrast to the 3 scenarios described earlier, the following scenarios were designed to evaluate the effect of the AR system on collaboration and situational awareness in the different teams (police, fire department and forensics). For that purpose, the scenarios are designed in such a way that they can be played in two conditions: (1) with AR support for virtual co-location and (2) with standard equipment following standard procedures.
Discovery of an ecstasy lab
A team of 2 policemen is informed about a situation via phone and arrives at an apartment. They discover a strange chemical smell and small chemical containers in front of the apartment (R). Before the policemen on site enter the building, they receive information about the location as well as the current inhabitant from their remote colleague. After ringing the bell, the policemen on site enter the building with the approval of the inhabitant, who appears in regular clothes in front of the police team. The policemen recognize a strange chemical smell emanating from within the house. At the site, they are able to mark suspected objects, take images of the location and send them to a remote expert (P). Again with the approval of the inhabitant, the police team starts searching the site. They follow the strange scent, which is even stronger inside the building (R). When they discover an ecstasy lab in the kitchen, full of chemical bottles, they arrest the inhabitant. The remote policeman calls the fire department for further support (M).
On arrival, the local firemen receive an oral briefing on the situation as discovered by the policemen on location (R). A team of 2 firemen enters the apartment. In the apartment, the firemen investigate the different rooms in order to secure the apartment for further investigation (P). They perform measurements on the found chemicals and the air quality. On clearance of the location, the remote fireman contacts the forensic institute for further investigation (M).
The forensic investigator receives an oral briefing on the location from the local firemen (R). After entering the apartment, the forensic investigator first analyses the site and sets up a research plan. This plan includes marking fingerprints on objects, collecting DNA evidence or taking pictures at the site (P). In discussion with a remote colleague, the local investigator refines the plan or asks for additional information from the fire department and police (M). Following the plan, the local investigator starts collecting evidence.
This scenario can be played in 2 conditions (with AR support and with standard equipment). When using standard equipment, the participants are only allowed to use their standard equipment for audio communication as well as a camera to take pictures for briefing and documentation purposes.
With AR support, one of the local participants wears an HMD for displaying augmented reality content and enabling virtual co-location with a remote colleague. Via a 3D user interface, the local participant can take pictures of the scene, annotate the scene with virtual objects, e.g. arrows, spheres, hazard symbols or evidence identification numbers, and share it with a remote colleague (see Section 4.5.3). In addition, the remote expert can provide information to the local participant, e.g. on the inhabitant of the apartment or the found chemicals, or annotate the scene using the same instruments as the local colleague (see Section 4.5.2).
In both conditions, the location needs to be prepared with suspect objects and fingerprints beforehand. Additionally, one actor needs to play the inhabitant on the spot. Audio communication among the local and remote team members needs to be established using the standard equipment of the different organisational units.
Home visit by a VIP
A VIP plans a home visit (R). Just before the visit, a reconnaissance team has to check the apartment for safety. For their safety check, the reconnaissance team receives information on the address as well as on the contact person living in the apartment. One member of the reconnaissance team goes to the apartment to check it for safety. Each room of the apartment is investigated. During the investigation, potentially suspicious and dangerous objects are discussed and checked with the local contact person (M). Dangerous objects are to be removed. Pictures are taken to make it possible to identify changes when visiting the apartment with the VIP (P). When the apartment can be declared safe, the reconnaissance team informs the personal protection unit.
The reconnaissance team orally briefs the personal protection unit using the pictures that have been taken during the investigation (R). At a later time, one member of the personal protection unit arrives with the VIP at the apartment. Together they enter the apartment. During the visit, the member of the personal protection unit discovers a recent suspicious change in the apartment (R) and decides to abort the visit (M). While the remote colleague provides information on possible evacuation routes, the VIP and the local member of the personal protection unit leave the apartment (P).
This scenario can also be played with AR support and with standard equipment. When using standard equipment, the reconnaissance team and the personal protection unit use their standard equipment for audio communication as well as a camera to take pictures for briefing and documentation purposes. With AR support, the local team member wears an HMD for displaying augmented reality content and enabling virtual co-location with a remote colleague. Via a 3D user interface, the local team member can take pictures of the scene and annotate the scene with virtual objects, e.g. to indicate that a suspect object has been checked and declared safe (see Section 4.5.3). The remote colleague can, for example, provide additional information on the planned visit or the address, or give information about the local contact person (see Section 4.5.2). In both conditions, the location needs to be prepared with suspect objects and changed after the visit of the reconnaissance team to simulate a possibly dangerous situation for the VIP. Additionally, one actor needs to play the local contact person, and audio communication among the team members needs to be established.
Participants
13 participants in total took part in the experiment. Participants were chosen randomly, depending on their availability on the day of the experiment. All participants were male, aged 25 to 54 years (M = 37.8, SD = 10.0). All had a minimum of 2 years of experience in their current professional occupation. The most experienced had 12 years of experience in his field (M = 6.3). 3 participants were forensic researchers from the Netherlands Forensic Institute (NFI). 3 were firemen from the fire brigade at the port of Rotterdam. 3 were policemen from the Dutch Police in North-Holland. 2 were from a close protection team of the Dutch Police, and 2 were from a reconnaissance team of the Royal Netherlands Marechaussee (RNLM), which is a gendarmerie corps, i.e. a police corps with military status. In addition to the above participants, 3 more members of the above organizations participated to play the roles of the inhabitant of the apartment in the ecstasy lab scenario, the contact person, and the VIP. These 3 members were also involved in the design of the scenarios.
Materials
In this second study, our aim was to investigate how distributed security teams collaborate with AR technology, and which effect the AR technology has on the situational awareness of these teams. We used a pre-questionnaire as the first measurement method (see Table 5). With the pre-questionnaire, data was collected about the participants’ background, their experience in the domain and with AR technology, and their expectations towards the experiment.
Table 5 Questionnaire on the participants’ background, experience and expectations.
For the first run through the scenario, participants were given the technology currently available in the field, such as their standard issue communication equipment and a camera. For the second run, one local participant used the AR support system described in chapter 3 to establish virtual co-location with a remote colleague. When using AR support, participants also used their standard communication equipment. After both rounds, a questionnaire was provided to the participants, which consisted of two sets of questions. Table 6 shows the questionnaire for the participants using AR support. The questionnaire for the participants without AR support differs only with regard to question 2.2. The first two sections of the questionnaire relate to the experiment itself. The third section assesses the quality of collaboration, by asking questions along the 7 dimensions of collaboration quality introduced by Burkhardt et al. (2009).
Table 6 Questionnaire on collaboration quality and situational awareness with AR support.
As we discussed in Section 2.2, situational awareness includes the perception, comprehension and prediction of each other’s actions within a given situation in order to align and integrate the team members’ actions. The fourth section of the post-questionnaire consists of a self-rating of the individual situational awareness. Several different methods exist for measuring the level of situational awareness, including freeze probe techniques, real-time probe techniques, self-rating techniques, observer rating techniques, and performance measures (Salmon et al. 2009). Very few measurement approaches exist for distributed or team situational awareness. For the questionnaire we use the validated post-test self-rating technique (Taylor 1990), as this avoids freezing the action during the test, as required when applying the SAGAT method (Endsley et al. 1998). Even though freeze-probe methods provide more significant data, they have the important drawback of interrupting the action, and thus may negatively affect performance. Self-rating techniques such as the SART questionnaire are administered post-trial, and thus have a non-intrusive character. Furthermore, Salmon et al. (2009) conclude that a post-test self-rating technique is applicable whenever “SA content is not pre-defined and the task is dynamic, collaborative, and changeable and the outcome is not known (e.g. real world tasks)” (Salmon et al. 2009). By assessing the individual SA, the team SA can be judged as well, since team SA is defined as “the degree to which every team member possesses the situation awareness required for his or her responsibilities” (Endsley 1995).
Finally, after each experiment, a structured de-briefing was used to further investigate the participants’ experiences with the technology, their self-rated collaboration quality and their SA. Two video cameras were used to record the experiment in order to conduct a qualitative analysis, again along the seven dimensions described by Burkhardt et al. (2009). One video camera was placed to record the actions and communications on the spot (local person); the other recorded the actions and communication of the remote person. This camera was also used to record the de-briefings.
For the analysis, we treat the answers to the 5-point and 7-point Likert items as ordinal data. To interpret and report results, we use the median values and the interquartile ranges derived from the answers to the questions. In addition, we use the p-values of two-sided Wilcoxon rank sum tests to determine whether the differences between categories for the same Likert items are statistically significant.
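As an illustration of this analysis pipeline, the following sketch computes medians and a two-sided rank sum p-value (via the usual normal approximation with tie-averaged ranks) for two hypothetical sets of Likert answers. The sample values are invented for illustration and are not data from the study.

```python
# Sketch of the ordinal analysis: medians, IQR, and a two-sided Wilcoxon
# rank sum test using the normal approximation with tie-averaged ranks.
from statistics import NormalDist, median

def rank_sum_p(x, y):
    """Two-sided rank sum p-value for samples x and y (normal approximation)."""
    combined = sorted(x + y)
    ranks = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1
        ranks[combined[i]] = (i + 1 + j) / 2  # average rank over ties
        i = j
    n1, n2 = len(x), len(y)
    w = sum(ranks[v] for v in x)              # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2               # expected rank sum under H0
    sigma = (n1 * n2 * (n1 + n2 + 1) / 12) ** 0.5
    z = (w - mu) / sigma
    return 2 * NormalDist().cdf(-abs(z))

with_ar = [4, 5, 5, 3, 4, 5]                  # hypothetical 7-point Likert answers
without_ar = [6, 6, 5, 7, 6, 6]
print(median(with_ar), median(without_ar))    # Mdn = 4.5 vs 6
print(rank_sum_p(with_ar, without_ar))        # p < 0.05: reject equal medians
```

The p-value is compared against 0.05, as in the tests reported in the results section.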
Table 7 illustrates the categories taken into account for the statistical analysis. The six categories C01-C06 are linked to the ecstasy lab scenario. Of these six categories, three represent experiments with AR support (C01-C03) and three represent experiments without AR support (C04-C06). The VIP scenario is studied using four categories (C07-C10). Of these, C07-C09 represent experiments with AR support and C10 represents the experiment without AR support. In addition, the six categories C11-C16 are independent of the scenario. Of these, the two categories C15 and C16 are also independent of the role played by the participants during the experiment sessions.
Table 7 Categories per scenario, condition and role.
To derive relevant observations from the data, the medians are used as the primary comparison criterion. The comparisons take into account valid pairs of categories, which in turn relate to experiments with the same scenario and role. The categories C11-C14 are exceptions in the sense that they refer to experiments on both scenarios. Still, C11-C14 consider the role played during the experiment, while C15 and C16 only distinguish whether AR support was used or not. Table 8 displays the pairs of categories for investigation. Please note that the categories C07 and C09 are not used for comparison, as the non-AR VIP scenario was played without a remote colleague, which resembles current work practices.
Table 8 Pairs of categories for comparison.
Procedure
All experiments took place indoors in a real training environment at the Netherlands Forensic Institute (NFI). The testing altogether lasted one day. Figure 7 shows the plan of the CSI lab at the NFI. The upper highlighted box shows the plan of the apartment that was used as the ecstasy lab and as the location for the home visit. The apartment consists of four rooms, i.e. a bedroom, a bathroom, a combined kitchen and living room, and an entrance hall. The orange highlighted box in the middle of the plan resembles a typical Dutch street. During the experiment, this area was used by the different emergency teams to orally brief each other about the situation. The lower highlighted box shows the location for the remote colleague and for further activities around the experiment, such as briefing and de-briefing. This location is physically separated from the apartment by walls and doors, so that remote and local persons could only interact via the available technology.
All participants of the experiments were given a slide presentation to introduce the goal of the experiment. In addition to this general presentation, the participants of the ecstasy lab scenario experiment, i.e. 3 policemen, 3 firemen and 3 forensic investigators, were given a presentation on the general outline of their scenario with and without AR support. The same applies to the participants of the VIP scenario, i.e. 2 members of the close protection unit and 2 members of the reconnaissance team.
Each of the scenarios was played twice: first without AR support, then with AR support. Between the rounds, the setup of the apartment was changed to avoid sequence effects. These changes included moving evidence from one location to another in the ecstasy lab scenario, or hiding different suspect objects in the VIP scenario. In addition, the roles of the participants were rotated to allow all participants to experience both the local and the remote role, e.g. a fireman who had the role of the remote colleague in the first round became the local fireman with AR support in the second round.
After the introductory presentation, all participants were asked to fill in the pre-questionnaire (see Table 5) simultaneously. After each round, all participants were asked to fill in the post-test questionnaire (see Table 6) and participate in a structured de-briefing session.
Compared to the previous experiment, the remote and the local user were both able to interact with and manipulate the virtual content, using a classic 2D graphical user interface (for the remote user) and a 3D user interface with hand gestural input (for the local user). For each scenario and each role, the user interfaces were customized according to the participants’ specific requirements. To become acquainted with the AR system, each participant group was trained on the remote user interface as well as the 3D user interface for the local user.
Distributed Collaborative Augmented Reality Environment (DECLARE)
In order to support the new scenarios, we extended our DECLARE framework (see Figure 8). Apart from a few minor changes in all components, major changes were made to the local user AR support component. These changes were necessary to enable local users to interact with the virtual content. For that purpose, the RGB-D camera of the HMD was used to enable hand tracking and implement a 3D user interface, allowing users to interact with the system with their bare hands. The following sections describe in detail the changes compared to the first evaluation round and explain the functionality available for local and remote users.
Localization and mapping
Compared to the implementation described in section 3.5.1, RDSLAM (Tan et al. 2013) offers an improved initialization phase and, more importantly, its updated version supports placing virtual objects.
The remote user can initiate the initialization step by pressing a button on the user interface. Again, the local user has to move the camera of the HMD horizontally, from left to right; during this process the best frames are selected automatically in order to set the 3D coordinate system. Re-initialization can be done at any moment by the remote user, but since this sets a new coordinate system, all virtual objects that are not in a fixed position on the screen will be deleted, as their locations will not fit the new coordinate system.
Second, the updated RDSLAM algorithm offers access to the entire cloud of points recognized up to the current moment, offering higher precision for placing virtual objects. For example, in Figure 9 the yellow points represent the currently tracked points, the blue points represent the whole cloud of points recognized up to the current moment, and the red ones represent invalid points.
Remote user AR support
Besides the actions described in Section 3.5.3, the remote user is now able to perform additional actions and place additional virtual objects by selecting the corresponding menu item in the left part of the 2D graphical user interface. Apart from the possibility to initialize and re-initialize the tracking via RDSLAM, several other actions were added to the 2D user interface of the remote user. The following subsections describe these additions and relate them to the scenarios.
Placing virtual objects superimposed on the real world
In addition to the 3D spheres, 3D blocks, 3D arrows, laser scanning markers and text notes already used in the previous experiment (see Table 2), remote users in the Ecstasy lab scenario can now place additional virtual objects (e.g. hazard symbols, DNA and fingerprint labels, barcode labels) to annotate the real scene (see Figure 10). The hazard symbols are used to indicate different dangerous substances, classified in 13 categories depending on the kind of danger they represent (e.g. explosive, radioactive, chemical contamination etc.). The DNA labels are attached to real objects from which samples need to be taken for DNA analysis. Similarly, the fingerprint labels indicate areas to be checked for fingerprint traces. The barcode labels, also called SIN in Dutch, are attached to evidence for later identification. All virtual objects are meant to trigger interaction and collaboration among the team members and the different involved organisations. As an example, consider a policeman marking suspicious chemical substances with a 3D sphere; a firefighter checks the substance and places the corresponding hazard symbol, and the forensic investigator decides, based on the mark-up, whether and how to collect evidence. The latter is then indicated by text notes and possibly a barcode for the evidence number.
Figure 11 shows some of the above symbols placed within the environment. On the wall in the back, for example, there is a DNA symbol; on the carpet in the front there is a small hazard symbol; and on the book on the table there is a fingerprint symbol.
Loading pictures taken with the HMD camera
The names of the pictures saved on the server appear in a list, from which the remote user can choose one to display, either in a fixed or in a relative position. A picture in a fixed position is mainly meant to provide additional information to the local user. When a picture is displayed in a relative position, this position corresponds to the position at which the picture was taken. This supports detecting suspicious changes.
Changing the colour of the virtual 3D objects
In the Home visit by a VIP scenario, the remote user can change the colour of a selected sphere, cube or arrow by pressing the R, G, or B key to colour the object red, green or blue correspondingly (see Figure 12). The different colours can be used to indicate different levels of importance for the annotations. Initially, an object in the apartment might, for example, be marked with a red sphere, as it is found to be suspicious. After consulting the local inhabitant, considering additional information, or discussing the object with the local colleague, the colour of the sphere might be changed to green, as the object is no longer suspect.
Local user AR support
The local user wears an optical see-through HMD, and the 3D user interface is adapted for the HMD from META (see Figure 6). The 3D user interface supports free-hand interaction with the environment. The local user is now able to interact with the virtual environment, not just visualize it.
If the right hand of the local user is in the view of the HMD depth camera, a point cloud of the hand appears, as can be seen in Figure 13. The hand is recognised when a small circle is displayed on top of one finger (the topmost finger on the vertical axis).
We designed a 3D user interface that allows local users to take specific actions depending on their role in the different scenarios as specified above. All actions fit into the following categories:
1. Taking pictures with the HMD camera
2. Placing virtual objects that are superimposed on the real world using tracking points provided by the RDSLAM component
All actions are triggered when the pointing circle on the recognised finger stays for 1.4 s over a menu button. The threshold of 1.4 s was set empirically in a user study with 10 users with different backgrounds in the use of AR systems. In this study, we noticed that 1 s was too quick to clearly identify the local user’s intention, while 2 s was too slow and in some cases led to fatigue of the local user.
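The dwell-based triggering described above can be sketched as follows. The class and the per-frame update call are illustrative and do not reproduce DECLARE’s actual implementation; only the 1.4 s threshold comes from the text.

```python
# Minimal sketch of a dwell-time menu button: the action fires only after
# the tracked fingertip has hovered over the button for 1.4 s.
DWELL_TIME = 1.4  # seconds, empirically chosen (1 s too quick, 2 s too slow)

class DwellButton:
    def __init__(self, label):
        self.label = label
        self.hover_since = None  # timestamp at which hovering started

    def update(self, is_hovered, now):
        """Called every frame; returns True exactly once when the dwell elapses."""
        if not is_hovered:
            self.hover_since = None
            return False
        if self.hover_since is None:
            self.hover_since = now   # hover just started
            return False
        if now - self.hover_since >= DWELL_TIME:
            self.hover_since = None  # reset so the action fires only once
            return True
        return False
```

In a frame loop, `update` would be fed the hover test result and the current time; the action bound to the button runs only on the frame where `update` returns `True`.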
Taking pictures with the HMD camera
The local user is able to take pictures with the HMD camera and store them in the shared memory space of DECLARE. The picture is taken 3 s after the action is triggered, so that the local person has time to move the hand out of the view of the camera. The local user further has the option to save the picture or to delete it (see Figure 13). When saved on the server, the picture is automatically assigned a filename. This is done to save time for the local user. The filename is unique and allows photos to be ordered according to the time they were taken.
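Unique, time-ordered naming could look like the following sketch. The exact naming scheme used in DECLARE is not specified in the text, so the timestamp-plus-counter format here is an assumption; its property is that lexicographic order equals chronological order.

```python
# Sketch of automatic picture naming: a UTC timestamp plus a counter yields
# unique filenames whose sort order matches the time the pictures were taken.
# The scheme is illustrative, not DECLARE's actual convention.
from datetime import datetime, timezone
import itertools

_counter = itertools.count(1)

def picture_filename(now=None):
    if now is None:
        now = datetime.now(timezone.utc)
    # e.g. 20160321T143205_0001.jpg
    return f"{now.strftime('%Y%m%dT%H%M%S')}_{next(_counter):04d}.jpg"
```

The counter breaks ties for pictures taken within the same second, keeping filenames unique.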
When the local user takes a picture, the current position of the HMD camera as computed by RDSLAM (Tan et al. 2013) is used to place a virtual object containing the picture. When a user selects such an object, the picture is displayed in a fixed position over the whole display in the HMD.
This is to support comparing the current real-world situation with a picture taken earlier. This functionality is especially important for the VIP scenario. In this scenario, the reconnaissance team might take pictures of the local environment once it is considered safe. The personal protection unit might check the pictures to identify changes to the environment. In case of suspicious changes, the VIP visit might be aborted.
Placing virtual objects that are superimposed on the real world using tracking points provided by the RDSLAM component
If a virtual object is created (e.g. via the first 3 buttons in Figure 14), it follows the movement of the recognised finger. To place the object in space, the finger has to be kept still for the same dwell time of 1.4 s. The coordinates of the object are computed by the RDSLAM component of DECLARE, which returns the closest tracked point from the cloud of points detected by the tracking algorithm up to that moment.
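Snapping a placed object to the nearest tracked point can be sketched as follows. The point cloud, the coordinates and the function name are illustrative; they do not reproduce RDSLAM’s actual API.

```python
# Sketch: anchor a virtual object to the tracked point closest to the
# finger position reported by the hand tracker. Coordinates are invented.
import math

def closest_tracked_point(finger_pos, cloud):
    """Return the 3D point in the cloud closest to the finger position."""
    return min(cloud, key=lambda p: math.dist(p, finger_pos))

cloud = [(0.0, 0.0, 1.0), (0.2, 0.1, 0.9), (1.5, 0.0, 2.0)]
anchor = closest_tracked_point((0.25, 0.1, 1.0), cloud)
```

Because the anchor is a tracked world point rather than a screen position, the object stays registered to the scene as the HMD camera moves.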
A virtual object is selected or deselected when the centre of the pointing circle of the recognised finger is hovering over that virtual object. A selected object can be resized, repositioned or deleted. To return to the main menu, the selected object has to be deleted, deselected or the button MAIN MENU has to be triggered (see Figure 15).
In each of the two scenarios, local users can place different virtual objects. Table 9 gives an overview of the virtual objects per scenario. The 3D spheres, blocks and arrows are used in both scenarios to mark certain objects that require special attention. The hazard symbols, DNA and fingerprint labels and the barcode labels can be used by the local user to annotate the scene. Annotating the scene with virtual objects supports information exchange between the local and remote users as well as among the different organisations involved in the different scenarios. As described for the remote user, a suspicious object in the Ecstasy lab scenario might be marked by the police, checked by the fire department and secured for evidence by the forensic institute. In the VIP scenario, suspicious objects in the real scene might initially be marked with, e.g., spheres coloured in red; after discussion with the remote colleague, the remote colleague might clear the object and mark it in green. This would indicate to the personal protection unit that a suspicious-looking object was checked for safety.
Table 9 Available virtual objects per scenario for placement in the real world.
In Figure 16 (left), the menu for placing hazard symbols can be seen in the view of the local user. The right side of the same figure shows the menu for placing fingerprint and DNA labels. The SIN button allows the selection of a barcode label that identifies evidence.
Results
This section reports on the results of the study on collaboration and situational awareness. In the following, we first discuss in detail the quantitative results from the questionnaires and then the qualitative results from the de-briefings.
Results from the post-test questionnaire
Table 10 presents the size of each set of data points for each of the 16 categories defined for the study. There were seven instances of missing data: one in category C04, item [4.7]; one in category C06, item [4.7]; one in category C10, item [3.4]; one in category C12, item [3.4]; one in category C14, item [4.7]; and two in category C16, items [3.4] and [4.7].
Table 10 Size of the questionnaire data set.
Given the Likert items from the questionnaire, an exploratory factor analysis identified two scales: collaboration quality (five items; Cronbach’s α = 0.98) and situational awareness (seven items; Cronbach’s α = 0.97). In order to compare the medians of the data sets C01 to C16 as specified in Table 8, statistical significance tests were run. First, the Anderson-Darling test is used to test whether the data comes from a normally distributed population. For most items and categories (234 out of 256 test cases), the data sets are not from a normally distributed population. In C08, the sets of data points per category and item are too small to test for a normal distribution (the AD test requires at least 4 samples per set). Second, a two-sided Wilcoxon rank sum test is used to test whether the data in two sets are samples from distributions with equal medians.
Table 11 shows partial results of the medians, interquartile ranges, and p-values for each test run. Only the pairs of categories for which the statistical tests lead to the rejection of the null hypothesis provide solid statistical evidence when comparing the medians. The cases providing statistically valid comparisons are highlighted in green. The complete set of test results is presented in Appendix I.
Table 11 Medians, interquartile range, and results of two-sided Wilcoxon rank sum tests per category (p-value).
The results for the ecstasy lab scenario indicate that the level of arousal [4.4] is lower (Mdn = 5, IQR = 2) when using the AR system (C01), compared to the standard approach with no AR (C04), for both local and remote users (Mdn = 6, IQR = 1.5) (p = 0.0046 < 0.05). For the same scenario, the arousal [4.4] is lower (Mdn = 4, IQR = 4) for the local user wearing the AR HMD (C03), compared to the standard procedure with no AR (C05) (Mdn = 6, IQR = 0) (p = 0.0273 < 0.05). In the same scenario, both local and remote users (C01) using the AR system focused on a lower number of aspects [4.5] (Mdn = 5, IQR = 1.5) than in the standard procedure involving no AR (C04) (Mdn = 6, IQR = 0.5) (p = 0.0025 < 0.05). Additionally, the level of attention for the user wearing an HMD in this scenario was lower (Mdn = 3.5, IQR = 3), compared to the standard approach without AR (Mdn = 6, IQR = 0.8) (p = 0.0016 < 0.05). The division of attention [4.6] was lower (Mdn = 4, IQR = 3) for the local user wearing the HMD in the ecstasy lab discovery scenario (C03), compared to using no AR support at all (C05) (Mdn = 6, IQR = 2.8) (p = 0.0315 < 0.05).
The same can be observed when considering both scenarios together. The level of arousal [4.4] (Mdn = 5, IQR = 3.3) for the local user wearing an HMD (C11) is lower than the level of arousal of the local user when no AR support is used (C12) (Mdn = 6, IQR = 0.8) (p = 0.0271 < 0.05). Similarly, the concentration level [4.5] of the local user is lower when using an AR HMD (C11) (Mdn = 4, IQR = 2.3), compared to using no AR system (C12) (Mdn = 6, IQR = 1.5) (p = 0.0003 < 0.05). Further, the attention level [4.6] of the local user is lower when using an AR HMD (C11) (Mdn = 5, IQR = 2.3), compared to using no AR support (C12) (Mdn = 6, IQR = 3) (p = 0.0351 < 0.05). The mental capacity [4.7] of the local user is lower when wearing an AR HMD (C11) (Mdn = 4, IQR = 2), compared to not using AR support at all (C12) (Mdn = 6, IQR = 1) (p = 0.0149 < 0.05).
The level of arousal [4.4] is lower for the AR users (C15) (Mdn = 5, IQR = 2.3) than for the non-AR users (C16) (Mdn = 6, IQR = 2) (p = 0.0115 < 0.05). A similar effect on attention [4.5] is observed for the AR users (C15) (Mdn = 5, IQR = 1.3), compared to the non-AR users (C16) (Mdn = 6, IQR = 1) (p = 0.0009 < 0.05).
Table 12 illustrates the results for demand, supply, understanding and overall SART scores per category. An overall SART score is derived using the formula SU = U − (D − S) (Taylor 1990), where U is the summed understanding, D is the summed demand and S is the summed supply. The understanding indicator is computed using the Likert items [4.8] and [4.9]. The demand indicator uses the Likert items [4.1], [4.2] and [4.3]. The supply indicator uses the Likert items [4.4], [4.5], [4.6] and [4.7]. The highest average overall SART score was 19.50, for the remote users using AR support in the ecstasy lab scenario (C02). This category also had the highest single overall SART score (33), together with three other categories, (C01), (C13) and (C15). The lowest average overall SART score was 10.17, for the local user using the AR HMD in the ecstasy lab scenario (C03). The highest average overall understanding (23.50) holds for two categories, i.e. for the remote users without AR support in the ecstasy lab scenario (C06) and for the remote users without AR support in both scenarios (C14). The lowest single value for the overall understanding (8) was registered for the categories (C01), (C03), (C11) and (C15). Of these four categories, the first two, (C01) and (C03), focus on the ecstasy lab scenario. The lowest average overall understanding (15.17) was for the local user with the AR HMD in the ecstasy lab scenario (C03).
Table 12 Results for demand, supply, understanding and overall SART scores per category.
Results from the de-briefing
The de-briefing of the scenarios without the use of AR technology shows that, at first, the participants consider their current technology sufficient. Nevertheless, they also experience clear limitations of this technology. Both the police team and the firemen in the ecstasy lab scenario used their cell phones to collect visual material of the scene, which they then used for the briefing of the next team. They noted that the pictures taken with their cell phones lack the detail required for a proper briefing. One participant stated that he sometimes only recognizes that he needs further information once he is at the scene himself, after the other team has already left.
Two main issues were raised during the de-briefing of the scenarios with the use of AR. First, the majority of the participants considered the role of the remote person an important added value of the new technology: the remote person can share the local view of the scene, add information immediately, and take pictures of the scene that can be used later on. With these abilities, the remote person can give advice and provide directions in stressful situations. It was reported as very useful that the remote user can easily take pictures of the scene, whereas this is much harder with the hand-tracking method available to the local user. The remote users in particular valued the AR technology as having great potential. One limitation of the remote role was also reported: officers working in the close protection field stated that the AR technology would not be that useful in dynamic, threatening situations, as a local has to respond immediately to any danger occurring and there would be no time or room for waiting and relying on another person's opinion. The role of the remote user in the AR scenario was thus summarized as an advisory one, without an important part in the on-the-spot decision-making and action-taking process.
The second issue concerns the situational awareness of the whole process. When one participant stated that by participating in the experiment “you are getting more aware of the other parties involved in the whole process and that your actions do have consequences for their work”, the other participants agreed that the experiment increased their awareness of the process as a whole and of their own role in it. The experiment clearly showed that each on-the-spot action has consequences for the work of other emergency services in the process, and that proper information transfer is crucial. AR technology can support the provision of information, but is seen first and foremost as a means to increase situational awareness.
The majority of the participants agreed with the observation of one participant that the AR technology introduces a higher workload, which could distract from crucial tasks in such a situation. One solution discussed by the participants was to introduce a new role, such as an AR expert, who accompanies the regular security team and handles the HMD-driven data collection on the spot.
Finally, participants could imagine using the AR technology for big events and for training. They considered it especially helpful if several local users could wear an HMD to share their views with several remote users, who would then collect and analyse the data and provide analysis results back to the local users. A combination with GPS was also considered a potential added value for the recognition of places and objects.
Discussion
Table 13 summarizes the overall findings of the study on collaboration and situational awareness.
Table 13 Overall results of the study on collaboration and situational awareness.
The experiment further showed that participants, both local and remote, experienced lower arousal with AR technology than in the same scenario without AR support. Additionally, the reported concentration and attention levels were lower with AR technology. Participants also reported that they had less mental capacity while using AR technology than while not using it. This could be related to the fact that the AR technology was new to all participants, so that they had to adapt to the system, which demands additional mental capacity compared to the situation without AR technology. This result matches the experience of a high workload reported by the participants in the de-briefing.
Operational units rely on quick and adequate access to, and exchange of, accurate context-related information (Lin et al. 2004). The exchange of and access to information is a prerequisite for SA (Endsley 1995), and up-to-date information facilitates and maintains the situational awareness of operational units (Straus et al. 2010). The experiment showed that AR technology can be used for context-related information access and exchange in the safety domain. While current technology (mostly mobile phones) is very limited in its ability to record and share a detailed picture of a crime scene, AR technology enables users to focus on details and supports oral communication about details of the crime scene. On the other hand, the strong focus on details sometimes hindered the ability to grasp the bigger picture of the scene. Still, the possibility to share information among the different organisations using AR clearly showed the participants that their actions have consequences for the work of other emergency services in the process and that proper information transfer is crucial. Thereby, AR indirectly increased the participants' awareness of inter-organisational collaboration and of their own role in it. This is in line with Reuter et al. (2014), who identified that shared information increases awareness along the organizational chain.
The experiment also illustrates shortcomings of the current technology. Some policemen experienced difficulties due to the temporary loss of visual tracking, which was caused by the very high pace of the tasks and by improper calibration of the marker-less tracking. As the RDSLAM system used (Tan et al. 2013) relies on a computer vision-based algorithm, the quality of the calibration and of the online tracking strongly depends on both the richness of visible patterns (for the calibration step) and good illumination conditions in the physical environment. Under such conditions, occasional technical issues with interacting within the AR system were noticed during the experiment. The participants pointed out that some actions were slower than in real operations.
The de-briefings clearly show that the participants see the greatest value of the AR technology in introducing a remote user with whom audio and video are shared in real time. This new role, including the ability to easily interact with the scene through the AR system by placing virtual objects, setting marks or taking pictures, is evaluated as an added value to the work at a crime scene. The remote user is considered a useful advisor in stressful situations and can provide the external support that action teams depend on (Sundstrom 1999). Using AR for such a virtual co-location of remote users might thus address the mismatch between the information needs of operational units and the ability of ICT to provide that information (Manning 1996; Sawyer and Tapia 2005). It was very beneficial that interaction with the AR system was very easy for the remote person. The value of the remote user is also supported by the results of the post-test self-rating of SA: the remote user in the ecstasy lab scenario received the highest score for individual SA and scored highest on understanding the situation. The de-briefing showed that the collaboration with the remote user also led to higher team SA, as participants playing the local role greatly appreciated the advice and actions of the remote user.
The ability of simultaneously sharing the view of the crime scene is also seen critically related to privacy issues. As contact persons might not know who is connected to the AR system, the technology might not be accepted in all places, e.g. work with VIPs. On the other hand, all participants mentioned the usefulness of the AR technology for big events and for training purposes.