1 Introduction

Emergency management is a critical and continuously evolving research area, where every improvement to methods or tools makes a significant contribution towards reducing the loss of human lives and resources. This awareness stimulates professionals and researchers in the crisis management field to devote considerable effort to defining future research directions, whose results are essential for public institutions and Civil Defence agencies to draw up an agenda and identify sectors where investments could produce effective solutions.

The use of Emergency Response Information Systems (ERIS) to manage activities meant to reduce the number of victims and the extent of damage, and to quickly restore a safe situation, is widely promoted by several agencies, along with the involvement of trained personnel for their immediate deployment. Besides traditional sectors such as geology, construction science, structural engineering, and materials science and technology, information and communication technology (ICT) is a cross-cutting sector that can contribute to enhancements in all aspects of crisis management. ICT already supports several aspects of crisis management and is paramount in activities that require promptness. Nevertheless, further ICT advances are needed to enable teams of practitioners to quickly take the appropriate actions and thus further decrease both human losses and damage [2]. Towards that goal, several research directions have been identified, all sharing the observation that the experiences of different actors and contributions from relevant domains are the only means to achieve stable and reliable solutions for crisis management [14].

In particular, the four phases of the emergency management process (prevention, preparedness, response and recovery) have significantly benefited from the adoption of integrated systems where procedures and standards are embedded within seamless infrastructures. Moreover, efficient organizational structures and effective mechanisms for collaboration among operators are key factors when designing an emergency management system. The preparedness and response phases include actions taken before, during and after a disaster event in order to reduce human and property losses. Such actions can be performed only if an overall view of the evolving situation is available both to those who make global strategic decisions from a Command and Control Center, or Centro Operativo Comunale (COC), and to those who act on the ground.

However, as argued by Jennex [7], in a real situation where people are under stress, the use of emergency information systems is often hindered by a lack of familiarity with them, and such unfamiliarity can notably impair the effective use of a new technology in crisis situations. In this paper we propose a solution to the general concern of training emergency responders so that they become familiar with the adopted mobile technology and hence benefit from enhanced situation awareness. In addition to the continuous technical skill upgrades required by the nature of the humanitarian context, the importance of appropriate training is widely recognized by all the actors playing a role in the emergency domain. In particular, giving responders the information technology skills that help them address health, security and managerial concerns is a key factor in pursuing an efficient and effective humanitarian emergency response. Moreover, as shown in [5], enhancing the role of on-site operators can improve collaboration among responders and provide all actors with increased situation awareness about the crisis evolution. Situation awareness and shared mental models are gained when information is gathered from multiple perspectives, acquired from the environment, received by voice, or encoded in artifacts [9]. Our proposal combines the pervasiveness of mobile technology and its adoption for collaboration purposes with the intuitive interaction offered by augmented reality (AR) and its capability to engage and motivate trainees, also thanks to the impactful visual cues provided by a visualization technique named Framy [12]. In particular, we propose an AR-based training system that, through two different interactive visualization modalities, leads the trainee within a scenario enriched by virtual content where data can be aggregated and associated with visual metaphors. By performing a set of suggested activities, the trainee acquires familiarity with the underlying technology both in terms of functionality and of participation in the overall decision making. Indeed, he/she is immediately informed about the effects of his/her interactions, thus improving his/her situation awareness.

The remainder of the paper is organized as follows. Section 2 presents related research on training methodologies and techniques in the domain of emergency management. Section 3 recalls the technology that underlies the AR-based training system. In Sect. 4 we explain how the proposed training system works, describing its adoption within a realistic scenario of use during a training session. In Sect. 5 the system architecture is described. Finally, some conclusions are drawn in Sect. 6.

2 Related Work

The re-creation of realistic environments where emergency response simulations can take place is considered paramount for effective training in emergency situations [4]. Jackson et al. [6] underline the importance of training activities that rely on realistic operational or situational scenarios. They also highlight that, by learning from the effects of simulation training, crisis management agencies may become better prepared for actual event management. Such beliefs have greatly encouraged the use of Virtual Reality (VR) to simulate crisis management activities as a way to increase safety standards, while retaining efficiency and reducing training costs. Similar benefits from the adoption of VR intelligent simulators have long been experienced in the general field of education and training [8]. Compared to traditional training techniques, the trainee becomes an actor in the simulated scenario and improves his/her cognitive and spatial skills and understanding through practice.

Several systems have been proposed to assist emergency management teams during training activities within immersive environments [10]. However, one issue with the adoption of 3D virtual environments is that their construction is hard and in most cases restricted to specific emergency situations.

As a solution, Aedo et al. [1] have suggested designing training software tools for Emergency Planning that are highly configurable, easy to use, and capable of reproducing different scenarios. With their simulation authoring system they emphasize the need to let simulation designers overcome problems related to the complexity of 3D virtual environments and to represent realistic situations and different action paths that can support the training process. We fully embraced this thesis and decided to investigate the use of AR technology as a way to increase trainees’ engagement and motivation. The system we propose originated from the idea of enhancing the efficacy of existing training procedures by allowing emergency responder trainees to exploit AR interaction and quickly become familiar with the mobile technology adopted today in response activities.

3 Background

In this section we briefly recall the technology underlying the AR-based training system we present, namely the information visualization technique Framy, the AR mobile interface Link2U and the spreadsheet-mediated collaborative system developed for emergency management.

Framy was conceived to enhance geographic information visualization on small-sized displays through a qualitative synthesis of both the on- and off-screen space [11, 12]. By displaying semi-transparent colored frames along the border of the device screen, users are provided with clues about the distribution of objects and phenomena. Each frame is partitioned into portions, each colored with a saturation whose intensity is proportional to the result of an aggregate function summarizing a property of the objects and phenomena located in the corresponding map sector, either inside or outside the screen. The higher the result, the greater the intensity.

Figure 1 illustrates a mobile device embedding Framy. The frame is partitioned into 8 yellow-colored portions, and the intensity of each portion is proportional to the number of points of interest (POIs) located in the corresponding sector around the map focus.

Fig. 1. An example of (on/off-)screen subdivision accomplished by Framy (Color figure online)
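As a concrete illustration of the idea behind Framy (a minimal sketch of ours, not the authors' implementation; all names are hypothetical), the following Java example groups POIs into frame portions by their bearing from the map focus and derives each portion's color intensity from a simple count aggregate.

import java.util.List;

/** Minimal, hypothetical sketch of Framy's per-sector aggregation. */
public class FramySketch {

    /** A point of interest given as an offset (dx, dy) from the map focus. */
    public record Poi(double dx, double dy) {}

    /**
     * Returns one intensity value in [0, 1] per frame portion.
     * The aggregate used here is a plain count; the technique allows any
     * aggregate function over the objects falling in each map sector.
     */
    public static double[] sectorIntensities(List<Poi> pois, int sectors) {
        int[] counts = new int[sectors];
        double sectorWidth = 2 * Math.PI / sectors;
        for (Poi p : pois) {
            double angle = Math.atan2(p.dy(), p.dx());                 // -pi .. pi
            double normalized = (angle + 2 * Math.PI) % (2 * Math.PI); // 0 .. 2*pi
            int sector = Math.min((int) (normalized / sectorWidth), sectors - 1);
            counts[sector]++;
        }
        int max = 1;
        for (int c : counts) max = Math.max(max, c);
        double[] intensities = new double[sectors];
        for (int i = 0; i < sectors; i++) {
            intensities[i] = (double) counts[i] / max;                 // higher count -> stronger color
        }
        return intensities;
    }

    public static void main(String[] args) {
        List<Poi> pois = List.of(new Poi(10, 2), new Poi(8, 3), new Poi(-5, -7));
        double[] frame = sectorIntensities(pois, 8);
        for (int i = 0; i < frame.length; i++) {
            System.out.printf("portion %d -> intensity %.2f%n", i, frame[i]);
        }
    }
}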

As for Link2U, it is an integrated solution for mobile devices which combines the potential of augmented reality with the communication capabilities of social networks [3]. It is based on two different visualization modalities, namely MapMode and LiveMode. The former, shown in Fig. 2(a), corresponds to the classic two-dimensional map view, where paths and geographic objects of interest are drawn on the map. The latter, illustrated in Fig. 2(b), exploits augmented reality both to improve users’ sensory perception of objects located inside the camera’s field of view and to provide visual clues about those located beyond it, which can be visualized as aggregated data.

Fig. 2. (a) MapMode visualization of POIs and users’ position. (b) LiveMode representation of a user’s position.
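To make the LiveMode behaviour just described more concrete, here is a minimal, hypothetical Java sketch (not the Link2U code) that decides whether a geo-referenced object falls inside the camera's horizontal field of view, given the device heading; objects outside the field of view would be summarized as aggregated data rather than drawn individually.

/** Hypothetical sketch: classify objects as inside or beyond the camera's field of view. */
public class LiveModeSketch {

    /** Smallest signed difference between two bearings, in degrees (-180 .. 180]. */
    static double angleDiff(double a, double b) {
        double d = (a - b) % 360.0;
        if (d > 180) d -= 360;
        if (d <= -180) d += 360;
        return d;
    }

    /**
     * True if an object at the given bearing (degrees clockwise from north)
     * lies inside the camera's horizontal field of view centred on the heading.
     */
    static boolean inFieldOfView(double objectBearing, double heading, double fovDegrees) {
        return Math.abs(angleDiff(objectBearing, heading)) <= fovDegrees / 2;
    }

    public static void main(String[] args) {
        double heading = 90;      // device pointing east
        double fov = 60;          // typical horizontal camera field of view
        double[] bearings = {80, 130, 270};
        int aggregated = 0;
        for (double b : bearings) {
            if (inFieldOfView(b, heading, fov)) {
                System.out.println("draw marker at bearing " + b);
            } else {
                aggregated++;     // would be shown as aggregated data
            }
        }
        System.out.println("objects beyond the field of view: " + aggregated);
    }
}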

Finally, the work described in [5] addresses some relevant requirements distilled during one of the earthquake simulation events that the Italian Civil Defence Agency periodically organizes in seismic territories. The result is a spreadsheet-mediated collaborative system that combines the advantages of mobile devices with the high potential of spreadsheets for supporting operators who act over a wide geographic area and require advanced tools for geodata collection and management. Figure 3 shows how quantitative and qualitative aspects of a situation can be displayed and shared on the same device. In particular, Fig. 3(a) shows a spreadsheet, shared among on-site operators, containing census information about a gathering area. In Fig. 3(b) the same information is summarized and visualized through the Framy visualization technique.

Fig. 3. (a) The spreadsheet-based information and (b) the Framy view of it

To achieve the goal of our present research work, the above-mentioned visualization techniques have been combined and embedded into a single application that allows users of more complex systems to perform training sessions and acquire familiarity with the underlying advanced technology. In particular, Framy’s capability for synthesis may help on-site responders and decision makers working in critical situations identify more convenient solutions which may not be directly visible from elsewhere, and evaluate constraints not detectable by remote sensors. Moreover, an overview of the surrounding world, captured through MapMode and then LiveMode, can help users generalize or detail the content of a given space. Finally, the integration of AR-based functionality can enhance users’ awareness of a situation and provide them with an improved user experience.

In the next section we explain how the proposed training system works, describing its adoption inside a realistic scenario of use during a training session.

4 The Use of Augmented Reality to Improve Trainee’s Experience

In [5] we described a contextual inquiry conducted in collaboration with the Civil Defence Agency of the town of Montemiletto, in the South of Italy. We observed the emergency management activities carried out during a simulation event and came to understand the importance of performing appropriate training activities through periodic simulations. Indeed, each emergency plan ends with a validation phase aimed at handling possible exceptions caused by both human factors and temporary objective impediments. During that phase it is important to schedule targeted training activities which may contribute to tuning the parameters involved in the underlying protocol (residents, personnel and tools), taking into account both the general requirements of national regulations and local availability and supply.

In this section we depict a scenario illustrating how the system works when it is used for training sessions aimed at emergency responders. Two main categories of actors are considered: the designers of the simulation scenarios and the system stakeholders. The first category comprises people involved in the design of the simulation tasks, with deep experience in the field of emergency management and in the conception of related evolving scenarios for training purposes. The system stakeholder category includes people from the emergency response team, who have to learn both the system functionality and the protocols for emergency management. The aim is to provide emergency trainers with a usable tool by which as many realistic situations as possible can be simulated.

In the following scenario, the simulation event takes place in Battipaglia (Salerno), a city in the South of Italy (geographic coordinates 40.617° N, 14.983° E). Basic information about the simulated situation is initially shared among participants, as follows. The territory is divided into 4 gathering areas (GAs) and 3 shelters. Each GA is associated with three local responders. The decision on how to distribute evacuees across the shelters is made taking into account the number of people to evacuate, information about their family composition, the number of vacancies at each shelter and the road status, as sketched in the example below.
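The following minimal Java sketch is purely illustrative (the names and the greedy policy are our own assumptions, not the protocol used by the COC): it shows one way such a distribution decision could be encoded, keeping families together and assigning each one to the first reachable shelter with enough vacancies.

import java.util.ArrayList;
import java.util.List;

/** Hypothetical sketch of a simple evacuee-distribution rule (not the official protocol). */
public class ShelterAssignmentSketch {

    record Family(String id, int members) {}

    static class Shelter {
        final String name;
        int vacancies;
        boolean reachable;    // false if the connecting road is interrupted
        Shelter(String name, int vacancies, boolean reachable) {
            this.name = name;
            this.vacancies = vacancies;
            this.reachable = reachable;
        }
    }

    /** Greedy assignment: each family goes to the first reachable shelter with room. */
    static List<String> assign(List<Family> families, List<Shelter> shelters) {
        List<String> plan = new ArrayList<>();
        for (Family f : families) {
            for (Shelter s : shelters) {
                if (s.reachable && s.vacancies >= f.members()) {
                    s.vacancies -= f.members();
                    plan.add(f.id() + " -> " + s.name);
                    break;
                }
            }
        }
        return plan;
    }

    public static void main(String[] args) {
        List<Family> families = List.of(new Family("F1", 4), new Family("F2", 2), new Family("F3", 5));
        List<Shelter> shelters = List.of(
                new Shelter("S1", 5, true),
                new Shelter("S2", 6, false),   // road interrupted
                new Shelter("S3", 8, true));
        assign(families, shelters).forEach(System.out::println);
    }
}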

Once the simulation event is started, a configured map is deployed by on-site responders/trainees, as shown in Fig. 4. The green line identifies the involved gathering area (GA1), while the cyan area indicates the associated shelter. Red polygons highlight seven collapsed buildings. Some of them are also associated with a red cross, which identifies places where people have died or have been injured (black and blue numbers, respectively). The configuration map also contains the coordinates where roads are interrupted and traffic must be redirected; a no-entry sign is used to mark those interruptions.

Fig. 4. The configuration map at time t0
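Purely as an illustration (hypothetical names, not the authors' data format), the configuration distributed at time t0 could be represented by a small set of geo-referenced records like the following Java sketch, covering the gathering area, the shelter, the collapsed buildings with their casualty counts, and the road interruptions.

import java.util.List;

/** Hypothetical sketch of the data carried by a configuration map such as the one in Fig. 4. */
public class ConfigurationMapSketch {

    record GeoPoint(double lat, double lon) {}

    record Building(String id, GeoPoint position, boolean collapsed, int deceased, int injured) {}

    record RoadInterruption(GeoPoint position, String detourNote) {}

    record Configuration(String gatheringArea,
                         String shelter,
                         List<Building> buildings,
                         List<RoadInterruption> interruptions) {}

    public static void main(String[] args) {
        Configuration t0 = new Configuration(
                "GA1",
                "Shelter-A",
                List.of(new Building("B1", new GeoPoint(40.617, 14.983), true, 1, 2),
                        new Building("B2", new GeoPoint(40.618, 14.985), true, 0, 3)),
                List.of(new RoadInterruption(new GeoPoint(40.616, 14.981), "redirect traffic north")));

        long collapsed = t0.buildings().stream().filter(Building::collapsed).count();
        System.out.println("collapsed buildings in " + t0.gatheringArea() + ": " + collapsed);
    }
}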

Figure 5 shows an example of the see-through image presented to the trainee. Basically, in order to understand whether a building is damaged, the trainee should reach it and verify its current status. To make the status of a damaged building immediately evident, in the LiveMode modality it appears surrounded by a red outline filled with a semitransparent red color. Moreover, information concerning the building and the associated GA is visualized on the left side. At the top, the Building info panel summarizes the current situation of the people living in the building, e.g., 3 families and 10 persons, the number of those currently recorded, the number of missing people and, finally, their health status, e.g., unharmed, injured or deceased. In this way, the trainee has an immediate view of the situation and can perform the appropriate operations. At the bottom, there is a general description of the gathering area he/she is managing, thus providing him/her with a complete view of the emergency management plan execution.

Fig. 5. The building as it appears to the trainee when its status is changed to collapsed (Color figure online)
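As a sketch of the counters summarized in the Building info panel (the names and the way the missing count is derived are our assumptions), the overlay data could be maintained as follows, with the number of missing people obtained from the residents not yet recorded.

/** Hypothetical sketch of the counters behind the Building info overlay. */
public class BuildingInfoSketch {

    enum HealthStatus { UNHARMED, INJURED, DECEASED }

    int families;
    int residents;
    int recorded;          // people already surveyed by the trainee
    int unharmed, injured, deceased;

    int missing() {        // residents not yet accounted for
        return residents - recorded;
    }

    void record(HealthStatus status) {
        recorded++;
        switch (status) {
            case UNHARMED -> unharmed++;
            case INJURED -> injured++;
            case DECEASED -> deceased++;
        }
    }

    public static void main(String[] args) {
        BuildingInfoSketch info = new BuildingInfoSketch();
        info.families = 3;
        info.residents = 10;
        info.record(HealthStatus.INJURED);
        info.record(HealthStatus.UNHARMED);
        System.out.println("recorded: " + info.recorded + ", missing: " + info.missing());
    }
}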

Once the trainee has reached the building, he/she has to perform the actions required by the protocol. Each action is simulated on site and then results in an actual interaction with the underlying system, such as notifying the building status and updating the figures shown in the Building info panel. In particular, he/she needs to check both the status of the building and the condition of the people living there (as shown on the top-left side by the training software). To do so, the trainee can interact with the other software components, namely Framy and the collaborative spreadsheet. Framy allows him/her to identify the building on the map through the MapMode view and update its status. Moreover, he/she has to add information about the current status of residents, starting from the data shown in the AR-based LiveMode view. Figure 6(a) illustrates the spreadsheet with which the trainee can interact in order to list residents and update their status. Once he/she has completed the survey, the modifications immediately appear in the AR-based LiveMode view and the area surrounding the building is set to green, as shown in Fig. 6(b).

Fig. 6. (a) The spreadsheet for people status insertion, (b) the new AR-based view (Color figure online)
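A minimal listener sketch (hypothetical interfaces of ours, not the actual system code) conveys this interaction: when the trainee completes the survey in the shared spreadsheet, the AR view is notified and re-colors the building overlay.

import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

/** Hypothetical sketch: spreadsheet edits immediately refresh the AR overlay. */
public class SurveyUpdateSketch {

    /** The shared sheet notifies interested views whenever a building survey is completed. */
    static class SharedSheet {
        private final List<Consumer<String>> listeners = new ArrayList<>();

        void onSurveyCompleted(Consumer<String> listener) {
            listeners.add(listener);
        }

        void completeSurvey(String buildingId) {
            // ... residents listed and their statuses updated here ...
            listeners.forEach(l -> l.accept(buildingId));
        }
    }

    public static void main(String[] args) {
        SharedSheet sheet = new SharedSheet();
        // The AR LiveMode view reacts by switching the overlay color from red to green.
        sheet.onSurveyCompleted(buildingId ->
                System.out.println("LiveMode: overlay for " + buildingId + " set to green"));
        sheet.completeSurvey("B1");
    }
}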

The use of the AR-enriched LiveMode view along with the collaborative spreadsheet and Framy allows the trainee to immediately provide the COC with information about the current crisis evolution. Moreover, he/she is promptly made aware of the contribution produced within the task thanks to the update displayed on the screen. This aspect is fundamental to obtaining a more collaborative involvement of trainees during the training session, because it generates greater situation awareness and stimulates confidence in the new technology.

5 System Architecture

The client-server architecture on which the system is based reflects the dichotomy of the actor categories: experts-server vs trainees-clients. This architecture can be decomposed into several components. As shown in Fig. 7, some components, such as SIRIO, are provided by third parties and are necessary for preserving experts’ knowledge and handling the current emergency management processes. In particular, the server side consists of three modules that are fully interoperable with SIRIO. The first one is devoted to the global management of the system: it integrates SIRIO with the other modules in a single environment and distributes data among the parts. Information sharing is managed by the Information Sharing Module (ISM), while training management is delegated to the Training Management Module (TMM). Basically, the ISM captures the information generated by SIRIO in order to share it with the clients and, vice versa, receives information from the clients and forwards it to SIRIO to be processed. The TMM works as a client even though it is located on the server side; it is needed to build scenarios for the trainees. It embeds a GIS and allows designers to describe the evolution of the emergency in real time. In fact, the changes applied to the emergency map are automatically sent to the ISM and then to the clients. When the clients receive those updates, they handle them through the AR module located on the client side.

Fig. 7. The use of the collaborative system during a simulation session
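The role of the ISM can be sketched as a simple bidirectional relay (hypothetical interfaces of ours, not the SIRIO or ISM API): updates produced on the server side are pushed to the clients, and client notifications are forwarded back to the backend.

import java.util.ArrayList;
import java.util.List;

/** Hypothetical sketch of the Information Sharing Module as a bidirectional relay. */
public class IsmSketch {

    interface Client { void receive(String update); }

    interface EmergencySystem { void process(String notification); }   // stands in for SIRIO

    static class InformationSharingModule {
        private final EmergencySystem backend;
        private final List<Client> clients = new ArrayList<>();

        InformationSharingModule(EmergencySystem backend) { this.backend = backend; }

        void register(Client c) { clients.add(c); }

        /** Updates produced server side (SIRIO or the TMM scenario editor) go to every client. */
        void pushToClients(String update) { clients.forEach(c -> c.receive(update)); }

        /** Notifications produced by trainees on the ground go back to the backend. */
        void forwardToBackend(String notification) { backend.process(notification); }
    }

    public static void main(String[] args) {
        InformationSharingModule ism =
                new InformationSharingModule(n -> System.out.println("backend processes: " + n));
        ism.register(u -> System.out.println("client renders: " + u));
        ism.pushToClients("building B1 marked as collapsed");
        ism.forwardToBackend("B1 survey completed by trainee T3");
    }
}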

As previously stated, in order to support the designers’ activities we developed a GIS module (the TMM) where it is possible to set a number of events characterizing an emergency, such as the positions of evacuees and the condition of certain buildings. An important characteristic of this system is the ability to describe evolving events made up of successive sub-events. To this aim, the system embeds a temporal GIS where each data item consists of a spatial component and a temporal component, as sketched below.
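A temporal GIS record can be thought of as a spatial feature paired with a validity time, so that the scenario can be replayed as a sequence of sub-events. The following minimal Java example (our own illustration, with hypothetical names) filters the features that belong to the scenario at a given simulation instant.

import java.time.Instant;
import java.util.List;

/** Hypothetical sketch of temporal GIS data: each feature carries a spatial and a time component. */
public class TemporalGisSketch {

    record Feature(String description, double lat, double lon, Instant validFrom) {}

    /** Returns the features that are part of the scenario at the given simulation time. */
    static List<Feature> stateAt(List<Feature> features, Instant time) {
        return features.stream()
                .filter(f -> !f.validFrom().isAfter(time))
                .toList();
    }

    public static void main(String[] args) {
        Instant t0 = Instant.parse("2015-06-01T09:00:00Z");
        List<Feature> scenario = List.of(
                new Feature("building B1 collapses", 40.617, 14.983, t0),
                new Feature("road interruption near GA1", 40.616, 14.981, t0.plusSeconds(1800)));

        // Ten minutes into the simulation only the first sub-event is active.
        stateAt(scenario, t0.plusSeconds(600))
                .forEach(f -> System.out.println(f.description()));
    }
}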

On the client side, the main components are Framy, the shared spreadsheet and the AR module. As for Framy, a fundamental requirement is that the trainee should be able to comprehend changes also in terms of the temporal evolution of the crisis. Thus, colored frames are used to represent comparative views of the same zone. This capability is important from a training point of view, as it provides a complete browsing experience useful to improve responders’ familiarity with new technologies. Details about the aggregate values associated with each sector, such as the number of POIs and their distances from specified locations, can be requested by tapping on the corresponding sector. The prototype featuring the present version of Framy has been developed using the Google API for the Android platform, a framework specifically conceived for geographic map visualization that allows users both to download maps from Google Maps and to manage many of the typical GIS operations usually required of navigation devices. Moreover, by offering tactile input and non-speech sound output as alternative interaction modalities, the prototype also provides a more appropriate interaction for users who experience difficulties due to specific environmental conditions [13].

The AR module exploits Link2U. Here, trainees can use both mobile devices and laptops, which are commonly equipped with an integrated camera for video-image capture, a Global Positioning System (GPS) receiver to detect the position, and a compass and motion sensors to detect the user’s point of view. In the AR-enhanced LiveMode view, visual metaphors are superimposed on the image captured by the device camera, where phenomena and objects of interest are visualized.
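To illustrate how those sensors cooperate in LiveMode (our own simplified geometry, not Link2U's), the sketch below computes the bearing from the GPS position of the device to a geo-referenced object and maps it to a horizontal screen coordinate using the compass heading and the camera's field of view.

/** Hypothetical sketch: place a geo-referenced marker on the camera image. */
public class ArProjectionSketch {

    /** Approximate initial bearing (degrees clockwise from north) from the device to a target. */
    static double bearingTo(double lat1, double lon1, double lat2, double lon2) {
        double phi1 = Math.toRadians(lat1), phi2 = Math.toRadians(lat2);
        double dLon = Math.toRadians(lon2 - lon1);
        double y = Math.sin(dLon) * Math.cos(phi2);
        double x = Math.cos(phi1) * Math.sin(phi2) - Math.sin(phi1) * Math.cos(phi2) * Math.cos(dLon);
        return (Math.toDegrees(Math.atan2(y, x)) + 360) % 360;
    }

    /** Horizontal pixel position of the marker, or -1 if it lies outside the field of view. */
    static int screenX(double bearing, double heading, double fovDegrees, int screenWidth) {
        double offset = ((bearing - heading + 540) % 360) - 180;   // signed difference in -180..180
        if (Math.abs(offset) > fovDegrees / 2) return -1;          // beyond the camera view: aggregate it
        return (int) ((offset / fovDegrees + 0.5) * screenWidth);
    }

    public static void main(String[] args) {
        // Device at the GA, pointing roughly north-east; target building nearby.
        double bearing = bearingTo(40.617, 14.983, 40.618, 14.985);
        int x = screenX(bearing, 60, 60, 1080);
        System.out.println("bearing " + Math.round(bearing) + " deg -> screen x = " + x);
    }
}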

Finally, as explained in Sect. 3, the collaborative work is based on the shared spreadsheets, where the communication between the central application SIRIO and the mobile application exploits web services based on Apache Axis 2 and Apache Tomcat. Further details can be found in [5].

6 Conclusions

Starting from a productive field trial in the domain of emergency response, in this paper we propose an innovative approach to the general concern of training emergency responders, which integrates recent results from the fields of information visualization, spatial data management and human-computer interaction. In particular, we show how an advanced visualization technique embedding visual summary metaphors can be integrated with AR functions in order to support responders in acquiring familiarity with new technologies and thus be well prepared to face health, security and managerial concerns, in agreement with the protocol established by the Civil Defence Agency.

The proposed system aims to present the trainee with a scenario where an emergency situation is simulated. Through interaction with a mobile device, the trainee is asked to perform some specific tasks that might prove difficult to perform during a real situation where people are under stress. Such scenarios can be customized according to the skills of each trainee, thus bridging the gap between a specific technology feature and the responder who is going to use it.

An initial analysis of some demonstrator training sessions has confirmed that the AR functionality supports trainees in building their personal mobile experience with new technologies. They benefit from shareable, low-cost “ubiquitous learning” thanks to the pervasiveness of the necessary hardware. Moreover, the involvement of professionals and volunteers in designing personalized training sessions proves to be important in order to achieve a closer match between the virtual content and a real emergency situation.