1 Introduction

Mars has been a major focus for most space agencies around the world, attracting much of their attention and funding. However, the way in which the interested parties collaborate in mission planning and operational meetings is still far from ideal. At present, these multidisciplinary tasks are carried out by geographically dispersed teams with varying fields of expertise (geologists, atmospheric scientists, engineers, etc.) that collaborate to obtain a particular outcome [17, 20]. This collaboration consists of several physical meetings in which a topic is discussed (e.g. landing site selection, the decision about the rover path on the surface, etc.) and the relevant data for each team is gathered before the teams disperse again to their original locations, where their own tools are used for planning, processing and analyzing the data. During this time, communication between teams is limited to email and videoconferences, thus hindering the collaborative exploration of challenges and potential solutions. This is mainly because these discussions do not take place within an integrated information space that represents the true condition of the planet, but through disjointed datasets in the form of images and graphs. Typically, there are no more physical interactions until the next meeting, which usually takes place several months later, hence adding delay and cost to the overall mission. Therefore, there is an urgent need for a platform that can support collaboration among the remote expert teams involved in space mission planning.

This need has been addressed by the European Union-funded project CROSS DRIVE [15], with a consortium consisting of atmospheric scientists, geologists, engineers, computer scientists and industrial partners involved in International Space Station and rover operations.

This paper presents a collaborative mission planning platform developed by the CROSS DRIVE consortium that allows space scientists and engineers to come together to interactively plan future missions within an immersive virtual environment. The vision behind this platform was to create the illusion of being "teleported" to Mars to jointly plan future missions, by combining information-rich 3D models of Mars with advanced immersive Virtual Reality (VR) technology. In this simulated environment, team members are able to meet in the same spatial and social context [40]. In this shared context, they can build a common understanding, explore the scientific data available within the virtual Mars model, make critical decisions on safe landing sites, conduct important scientific investigations during the mission, test safe rover manipulations, etc. This paper presents the technical architecture of the virtual mission-planning platform built to realize this vision. Specifically, it investigates the important functional characteristics of a software framework that can support experts from heterogeneous disciplines in conducting future mission planning exercises for Mars. The paper attempts to answer the following research question: What is the nature of a system architecture that can support collaboration among multidisciplinary teams during planning and operation meetings for the space industry and research?

This paper is structured as follows. Related work is discussed in Section 2. In Section 3, the research method and the approach followed are described. Section 4 provides an overall view of the problem, its relevance and the main research contributions. Section 5 focuses on the design and development of the system architecture, while Section 6 outlines the validation carried out during the whole project. Finally, Section 7 presents our conclusions and future work.

2 Related work

Team meetings play an important role in planning and delivering complex projects, supporting communication among team members and coordinating parallel team activities [16]. For that reason, Computer Supported Collaborative Work (CSCW) has been intensively investigated over the last decades [42]. Several tools and frameworks for developing virtual environments, such as VRJuggler [7], COVEN [38], AfreeCA [32] and Cospaces [3], have been developed to explore virtual meeting environments based on distributed Virtual Reality (VR) technology. Whilst these platforms have successfully demonstrated the potential of constructing distributed platforms for creating virtual meetings for remote teams [10], they have not given much attention to the industry context, the requirements for multi-disciplinary team interaction, task analysis, or the richness of the data required for conducting appropriate team activities, especially within the context of space exploration.

Similarly, much research has attempted to explore various spatial metaphors and user embodiment techniques to enhance social interaction in virtual meetings. For example, Benford and Fahlén [6] describe a conference table designed to show the capabilities of the spatial model of interaction (SMoI). Bowers et al. [8] evaluated virtual meetings using conversation analysis to identify turn-taking and participation limitations. Even though the embodiments used were expressionless, the study concluded that they play an important role in social interaction. More recently, Martinez et al. [32] replicated the traditional conference room example, this time using a model of interaction that overcomes some of the deficiencies of the SMoI. However, all these examples concern unstructured, general-purpose meetings and do not focus on structured meetings in a real industry context.

Given the importance of user embodiments, research in telepresence technologies has tried to improve social interaction in collaborative environments [50]. One approach in that direction is the use of 3D reconstructed video for communication, creating real-time avatars from several video streams [14]. This provides a faithful representation of the user that is able to transmit appearance, attention, action and non-verbal communication [41].

The technology supporting most of these developments is known as Collaborative Virtual Environments (CVE). CVEs are complex distributed systems that must face several challenges to become usable products. Examples of these challenges from the point of view of the user experience are described in [13], and some of them, pointed out about 18 years ago, have not been satisfactorily solved yet. To add further difficulty, building systems by gluing together components that may work as solutions to individual problems is not guaranteed to work as a whole [51]. Therefore, building a system architecture for CVEs requires special care and attention.

CVEs usually rely on distributed architectures to provide interactive virtual environments to geographically dispersed users. However, there is no agreement on the right architecture for these systems. Several types of general-purpose distributed architectures have been proposed in the literature, from the classic client-server and layer-based approaches to the modern service-oriented and cloud computing ones [48]. Collaborative applications in different fields have used some of these types of architecture. Maher et al. [31] describe a prototype system for multidisciplinary collaboration: essentially a conceptual design tool using SecondLife and web-based extensions that allowed multiple representations of objects, ownership, etc. However, this kind of approach (based on generic virtual world systems) is not adequate for the purpose of the current paper, as immersion and advanced visualization techniques are required. Moerland et al. [36] describe a distributed platform for collaborative aircraft design. The functionality and tools are easily distributed, using a service-oriented approach, to the places where the experts in each discipline reside, and the results are sent to the following tool in the procedural workflow. This contrasts with the type of meetings described in this paper: our work is mostly exploratory and, even though meetings in CROSS DRIVE have some structure, they are not that highly structured nor do they follow a clear, pre-established workflow. However, the way the tools are geographically distributed facilitates the management of the services. Another example is [28], which uses a five-layer architecture for a distributed risk assessment system using VR. The layered architecture reduces software complexity, simplifying dependencies by grouping logically related components in layers, similarly to the architecture described in [34].

We explored the use of these architectures, studying the system from different perspectives in search of a sound solution, which is explained in depth in Section 5.

3 Research method and approach

This research has followed the design science research methodology proposed by [24] because it seeks to provide effective and efficient solutions to domain-specific problems in the form of information technology artifacts while ensuring theoretical foundation, scientific rigor and validation. The Design Science approach was originally described as seven guidelines in [24] and presented as a methodology in [39]. This research follows a 3-phase approach, depicted in Fig. 1, which is similar to the one used in [5] and is based on the guidelines established in [47].

Fig. 1

Design Science approach adopted [5]

As shown in Fig. 1, Phase 1 focuses on problem identification and includes guideline 2 (problem relevance) and guideline 4 (research contribution). In this initial phase, the importance of the problem is made clear by describing inherent domain challenges and proposing a potential solution approach that makes a contribution to the problem domain. After the problem has been identified, Phase 2 (artifact design and development) provides a technical solution following an engineering design and implementation process. This phase includes guideline 1 (design as an artifact) and guideline 6 (design as a search process). After the artifact that provides a solution to the problem has been developed, Phase 3 is used to evaluate it in order to demonstrate the effectiveness and completeness of the solution. Baur et al. [5] situate guideline 7 (communication of research) after the three phases to enable researchers to build a cumulative knowledge base for further extension and evaluation [24]. Also in Fig. 1, guideline 5 (research rigor) emphasizes the need for rigorous methods in the construction and evaluation of the artifacts throughout the entire research process.

4 Problem identification

This section addresses the first phase of the design science methodology by describing the relevance of the problem and establishing the main objectives of the proposed solution.

4.1 Problem relevance

The introduction of this paper (Section 1) already articulated the limitations of the current team meetings involving space scientists and engineers in space mission planning. Due to the fragmented nature of the data and the simulation tools, multi-disciplinary discussions during space mission planning meetings are inefficient, introducing delays and increasing costs to current space mission programmes. Therefore, there is a need for a collaborative mission planning platform that can allow space scientists and engineers to come together to interactively plan future missions.

The solution explored within this project is the creation of a collaborative virtual environment that allows distributed experts to meet within a virtual representation of Mars using immersive technologies. The virtual Mars model should be based on a semantically rich information model and should offer access to the necessary intelligence, as well as to simulators and physical rovers, to conduct various scientific and operational investigations and team discussions.

In order to elaborate the business requirements for the collaborative virtual environment, three use cases based on key mission planning activities were defined in conjunction with the scientific and engineering partners of the project, as they are the typical end users of the system. The three use cases defined in this research are: 1) landing site characterization, 2) Mars atmospheric data analysis, and 3) rover target selection. After analyzing a wide range of possible scenarios, these use cases were selected because they represent a good mix of data analysis requirements, probe operations and close collaboration tasks between scientists and engineers in mission planning operations. These use cases allowed the domain experts and the computer scientists to collectively capture the challenges faced during mission planning and operational meetings and to define the nature of the future mission planning environment. Furthermore, these cases were instrumental in implementing a co-creation approach to incrementally and iteratively define, develop, validate and refine the overall space mission planning platform. To avoid unnecessarily extending the length of the paper, the following paragraphs only describe the rover target selection use case, which in fact includes and extends the functionality developed for the other use cases. The rover target selection use case was divided into two main events: scientific characterization of the rover landing area and rover path planning.

The scientific characterization of the rover landing area starts with engineers analyzing the orbit of the spacecraft covering the area. At this level, low-resolution but full-planet-coverage datasets are required for the terrain representation, and the composition of the atmosphere needs to be available for study to explore the landing trajectory of the spacecraft. After this, the focus moves to regional coverage, using more detailed terrain datasets with which the scientists explore a suitable landing area. Finally, the focus is set to local coverage, based on high-resolution data, at the place where the rover is planned to land on the Mars surface. The site selected for the use cases is Gale Crater, since a rich set of information is available to the scientists from previous missions. Once landed, status information about the rover is requested and analyzed to get a preliminary evaluation of the capabilities of the rover with respect to its mobility and the visible areas. In order to ensure that commands for the rover could be issued and its operations could be tested, this use case used the Mars and Moon Terrain Demonstrator (MMTD) facility located in the mission control center at one of the partners' facilities (Altec). The MMTD offers a 20 × 20 m physical representation of a Mars terrain where prototypes of the ExoMars rover are being tested.

Rover path planning uses the simulated terrain in front of the rover, identifying both places of interest and possible hazards (soft soil areas, rocks, etc.). At this point, a set of paths covering interesting features of the terrain is calculated. A selection of these paths is simulated by the team using the virtual rover, and the most appropriate path from the point of view of the operational scenario is then tested in the physical MMTD facility. The images generated by the physical rover and its telemetry data are sent back to the collaboration platform for assessment.

4.2 Requirements extracted from the use cases

By analyzing the use cases and through co-creation workshops with the scientists and engineers, the following list of system requirements was extracted:

  • The system should support different types of meetings with different objectives, covering the full range of activities identified in the use cases of the project.

  • The system should support different types of users, such as core users (Mission Director, Scientists, Engineers) as well as external experts who are invited as needed with limited access rights. It should support a minimum of 8 users connected simultaneously.

    • Core members should be able to connect via their immersive display systems, and external users via their low-cost computers.

  • The system should provide access to a range of available data, including Mars terrain and atmospheric data as well as rover and satellite data.

  • The system should offer a range of rendering techniques, such as 3D rendering, volume visualization and 2D graphs, to visualize terrain, atmosphere and simulation data.

  • The system should offer a range of tools for annotation, measurement, data clipping and slicing within the Mars 3D environment.

  • The system should offer simulation of the rover on the Mars surface for operative sessions and connect the rover simulator to the physical rover in the MMTD facility.

  • The system should offer user presence through virtual avatars and should allow users to navigate, interact and discuss scientific and operational matters through audio channels.

  • The system should provide the ability to connect with simulators running remotely on high-performance computing clusters and to visualize their results in the immersive environment.

4.3 Research contributions

The main areas in which effective design science research projects are expected to provide contributions are the design artifact, design construction knowledge (foundations) and/or design evaluation knowledge. In this research project, the main contribution is the design and implementation of a collaborative virtual environment that can be used to support space data visualization and mission planning involving a range of scientists and engineers. The overall project addressed many challenges, such as:

  • Integration of disconnected remote sensing datasets to create an integrated 3D model of the planet Mars;

  • The management of level-of-detail control of the massive planet model to offer real-time interaction within an immersive distributed VR environment;

  • Access to remote compute services;

  • Tele-immersion for enhanced user presence;

  • Management of parallel team meetings within a single platform;

  • Tele-operation with the rover on the MMTD facility, etc.

However, the main contribution of this paper is the detailed analysis of the nature of the collaboration platform that is necessary for supporting team collaboration in space mission planning. To this end, this paper has used techniques such as use case evaluation and co-design activities involving end users to extract the functional requirements for an ideal mission planning system. Furthermore, this paper presents a detailed discussion on the technical architecture and an implementation of this technical architecture that is built upon the functional requirements identified in this study.

5 System architecture design and development

This section describes the design and development of the collaborative virtual environment for space mission planning that fulfills the user requirements identified in the previous section. In search of a sound solution, several options for the design of the collaborative platform were considered, as it is a complex task that requires an effective system architecture to support collaboration. In general, a system architecture is the conceptual model that defines the structure, behavior and views of a software system [27]. Different sets of views are typically used to break down the complexity of designing software systems [25, 30, 44]. The main idea behind the use of views is to restrict attention to a certain aspect of the system, ignoring others that will be addressed separately [6], as it is not possible to describe a complex system from just one perspective [9]. System architecture designers are advised to first identify the set of views relevant to the system being designed [9, 26]. For this research, we first focused on the conceptual design of the system, using a set of views based on [12], which extends the views of the Collaboration Lifecycle Management proposed in the Collaboration Oriented Architecture (COA) framework [22]. These views were used to cover the activities described in the project use cases as well as to further elaborate the user requirements and identify the functional characteristics of the collaborative mission planning platform.

Another common approach to describing software systems is by using architectural patterns [46], such as the layered architecture, which uses layers or tiers to partition the concerns of the application. In our approach, the conceptual system views were mapped into a three-layer architecture (presentation, service and data) within which functional modules were defined and grouped in each layer.

5.1 Conceptual system design based on system views

The conceptual system design for the collaborative virtual environment is defined using the following views: Team Members View that captures user roles; Workspaces View that captures different spaces to allow collaborative and individual work; Meeting Process View that captures the structure of meetings; Communication View that presents the way users communicate with each other; User Interface View that is based on the user context; Activities and Tools View that identifies tools; and Information View that captures data required for mission planning tasks (see Fig. 2). These views were proposed to reflect the collaborative process and activities during team meetings for mission planning exercises and are discussed in detail in the following sub-sections.

Fig. 2

The system conceptual views

5.1.1 Team members view

The Team Members view describes the types of users involved in team meetings, taking into account the roles, responsibilities and meeting objectives for each individual during the collaboration process. The roles identified are summarized in Table 1.

Table 1 A summary of the team members’ profiles including roles, project responsibilities and meeting objectives

The typical meetings in space planning and operation are based on a turn-taking strategy. The main actor in these meetings is the Mission Director (MD), who acts as the chair of the meeting and gives the floor to the users so they can share their results. The Mission Director is typically located in the mission control center. The second type of users in these meetings are the scientists and engineers, who join the collaboration platform from their remote locations to contribute to the meetings with their own expertise. These users (MD, scientists and engineers) are considered "core users" with high security clearance to access data and sessions, as they are part of the industry consortium that is responsible for delivering the overall space mission program. These core members frequently seek advice from external scientists to interpret certain data or to help them with simulation or operational planning. These external scientists enrich meetings by bringing specific knowledge to discuss a particular scientific subject. However, external experts are only exposed to a restricted amount of information and therefore require a special interface to engage with collaborative meetings using their own computers rather than a fully-fledged VR environment. As a result, the need for a 2D visual interface that makes a selected set of data available to the external users was identified as another key functional requirement.

5.1.2 Workspace view

The Workspace view manages the team space during a collaborative session. All participants in a collaborative session share the same virtual space, but their participation is moderated by the Mission Director. Accordingly, two workspace views have been envisaged: a Private Workspace view, in which participants are free to move and interact within the virtual Mars model as well as to execute different analyses in parallel; and a Team Workspace view, in which one user (the "presenter") shares their simulation results and key findings with the rest (the "audience"), or expresses their expert opinion on a particular issue. However, before any user can become the presenter, they need the permission of the MD to take that role. The Team Workspace view can be extended to support the idea of forming groups by replicating the presenter-audience metaphor to conduct specific joint explorations. In this context, the MD is able to create special interest groups according to the needs of the current session, which are independent from each other. In such instances, however, the MD is not required to be the chair inside a group but allows the group to decide how the role of the presenter is assigned. The results obtained in the private/group views are shared with the entire team only if considered important for the discussion. If nothing of interest is found, users are able to erase their settings and go back to the initial state to start new analyses. Figure 3 shows the evolution of a session from seven users working in parallel (a) to three independent groups (b). This workspace management structure was identified to support the different styles of working patterns desirable in the mission planning meetings. A toy sketch of these rules follows Fig. 3.

Fig. 3

Session before (a) and after (b) three groups are created by the MD
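
The following toy sketch (in Python) illustrates the workspace rules described above: everyone shares the session, and only the Mission Director can grant the presenter role or create independent groups. The class and method names are illustrative, not the system's real interfaces.

```python
# Toy sketch of the workspace rules: MD-moderated presenter role and groups.
# Names are illustrative only, not the actual CROSS DRIVE interfaces.
class Session:
    def __init__(self, mission_director):
        self.md = mission_director
        self.presenter = None
        self.groups = {}                     # group name -> set of users

    def request_presenter(self, user, approved_by):
        """Only the MD can promote a user to presenter for the team view."""
        if approved_by == self.md:
            self.presenter = user

    def create_group(self, name, members, created_by):
        """The MD forms independent special-interest groups; inside a group,
        members decide among themselves who presents."""
        if created_by == self.md:
            self.groups[name] = set(members)


session = Session("MD")
session.request_presenter("geologist_1", approved_by="MD")
session.create_group("atmosphere", {"sci_1", "sci_2"}, created_by="MD")
```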

5.1.3 Meeting process view

By analyzing the three use cases and through co-design activities with the end users, two different types of meetings were identified based on their objectives: 1) Science Meetings, which focus on data comparison and simulation results, and 2) Operative Sessions, which focus on rover operations. Each of these meetings involves two types of activities: a) Individual or Group Exploration activities, to conduct detailed simulation studies or rover operational testing, which are typically time consuming due to heavy simulation or rover testing times; and b) Team Presentation activities, which focus purely on presenting the outcome of the previous exploration activities, such as simulation results or rover manipulation results, to the entire team. Figure 4a presents the workflow of the exploration activities, while Fig. 4b presents the workflow of the team presentation activities. In a typical scenario, meetings start with an introduction from the MD, who then invokes either a presentation session to discuss pre-computed science results, an operative session, or exploration activities for individuals or groups to assess various scientific or operative aspects.

Fig. 4

Workflow of the exploration activities (a) and workflow of the presentation activities (b), both as part of the Meeting Process view

The science meetings are designed to compare the archived datasets with data coming from simulated models. Typically, simulations are time consuming and demand computing power, and hence are computed on dedicated remote servers. Therefore, such simulations are conducted by the experts in their private workspaces and brought to discussion during the presentation phase of the team meetings. Similarly, the objective of the operative sessions revolves around rover operations. This includes collecting and analyzing telemetry data coming from the rover and deciding the list of tele-commands to be sent to the real rover for execution. Once the list of tele-commands is decided, they are submitted by the MD, as this is the only user with direct access to the real rover.

5.1.4 Communication view

Typically, tele-conference systems are used to reproduce face-to-face meetings. While current tele-conference systems are now mature enough to support substantial interaction between remote teams sharing 2D information and discussing issues, they do not allow remote teams to be present in the same 3D environment and conduct complex scientific and engineering tasks. This hinders teamwork in applications such as space mission planning, where much greater understanding, communication, joint exploration and discussion among the team is important for making sound decisions regarding landing characterizations, complex rover manipulations and atmospheric conditions. Therefore, in order to provide a more natural way to communicate, this research project decided to explore the use of telepresence technology [41] to provide a high-fidelity 3D representation of the users in real time, with the main aim of creating realistic face-to-face meetings. The idea here was to reproduce all the communication cues (audio, visual, body expressions, facial expressions and gestures) that we enjoy in face-to-face meetings. The interested reader can find a detailed description of the telepresence aspects of this project (physical setup, algorithms and evaluation) in [11], as the focus of the current paper is on the software architecture supporting the whole system. Figure 5 shows a 3D reconstructed user waving at two collaborators, one local and the other remote (represented by a traditional avatar).

Fig. 5

Prototype of the telepresence system showing one 3D reconstructed user, a traditional avatar and a local user

5.1.5 User interface view

The user requirements demanded two types of user interfaces: a fully immersive VR interface for the core users (MD, scientists and engineers) and a 2D interface for the external scientific experts. The former should provide access to the complete functionality of the VR system, while the latter should provide reduced access to datasets and functionality. In order to support the fully immersive experience for the core users, the virtual environment should support display technologies such as Powerwalls, CAVEs and HMDs, with body tracking (especially head and hands) and 3D interaction devices for navigation and object interaction tasks. In our research, a ray-casting interaction technique [35] is used for selection (sketched below), in conjunction with a virtual joystick for navigation, similar to the hand-directed movement technique described in [35].
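
As a rough illustration of how such pointing-based selection works, the sketch below intersects a pointing ray with per-object sphere bounds and picks the closest hit. The sphere-bound test and the object layout are simplifying assumptions for illustration, not the project's actual geometry code.

```python
# Illustrative ray-casting selection: intersect the pointing ray from the
# tracked flystick with sphere bounds and keep the nearest hit.
import math


def ray_hits_sphere(origin, direction, center, radius):
    """Return the ray parameter t of the nearest hit, or None.
    Assumes `direction` is a unit vector."""
    oc = [origin[i] - center[i] for i in range(3)]
    b = 2.0 * sum(direction[i] * oc[i] for i in range(3))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None                      # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t >= 0.0 else None       # ignore hits behind the user


def pick(origin, direction, objects):
    """Select the closest object along the pointing ray."""
    best = None
    for obj in objects:
        t = ray_hits_sphere(origin, direction, obj["center"], obj["radius"])
        if t is not None and (best is None or t < best[0]):
            best = (t, obj)
    return best


# Example: the flystick at the origin pointing down -z selects the landmark.
landmarks = [{"name": "crater_rim", "center": (0, 0, -50), "radius": 5}]
print(pick((0, 0, 0), (0, 0, -1), landmarks))
```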

The external interface is designed for common desktop PCs, providing reduced interaction with the core system. The main idea behind this is to allow the external system to be executed on a wide range of PCs without the need for high-end computers. Therefore, the external interface is based on the windowing metaphor and makes use of standard keyboard and mouse interaction. The 3D models of Mars are replaced by 2D maps that can be explored in a similar way to Google Maps.

5.1.6 Activities and tools view

The activities and tools view identified the tools that are required for supporting team members’ activities during a collaborative team session. Three different groups of tools were identified from the analysis of the scenarios (Figs. 6 and 7):

  • Data Exploration Tools: The data exploration tools were divided into two categories, terrain and atmosphere. The terrain tools allow the user to show or hide the various datasets available, exaggerate the height information of the terrain for easier exploration, draw contour lines at configurable intervals, and colour-code the terrain according to its topography (elevation, slopes, etc.). The atmosphere tools allow the user to visualize various atmospheric data using volume rendering, iso-surface visualization, and data slicing and clipping, to hide and show various data elements, to visualize 2D maps illustrating simulated or measured data, and to exaggerate altitude for easier exploration.

  • GIS Tools: The GIS tools allow drawing annotations on the terrain or the atmosphere using different shapes, arrows, text, ellipses and polygons during private or team exploration activities. Moreover, these tools can be used to measure distances, either Euclidean or taking the topography into account (see the sketch after this list).

  • Engineering Tools: The engineering tools provide the functionality to interact with the rover and satellite simulations, as well as to interact with the physical rover on the MMTD.
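
To illustrate the two measurement modes mentioned in the GIS tools item, the following sketch contrasts straight-line (Euclidean) distance with a surface distance that follows the topography by sampling terrain heights along the line. The `height_at` terrain-lookup function is an assumed helper, not a project API.

```python
# Illustrative distance measurement: Euclidean vs. topography-following.
import math


def euclidean_distance(p, q):
    """Straight-line 3D distance between two points."""
    return math.dist(p, q)


def surface_distance(p, q, height_at, steps=100):
    """Accumulate 3D segment lengths along the terrain between p and q.
    `height_at(x, y)` is an assumed terrain height lookup."""
    total = 0.0
    prev = (p[0], p[1], height_at(p[0], p[1]))
    for i in range(1, steps + 1):
        t = i / steps
        x = p[0] + t * (q[0] - p[0])
        y = p[1] + t * (q[1] - p[1])
        cur = (x, y, height_at(x, y))
        total += math.dist(prev, cur)   # add one small surface segment
        prev = cur
    return total
```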

Fig. 6

Tools available for core users

Fig. 7

Tools available for external users

Due to the restricted access imposed on external users, the system has to control the type of activities they can perform. In the current implementation, the tools made available to these users are presented in Fig. 7.

5.1.7 Information view

The Information view provides a definition of the data from different sources and of how they can be brought together and managed during collaboration. There are two main groups of data used in CROSS DRIVE: datasets about Mars and real-time data exchanged by the users. Regarding the former group, the datasets have been adapted to use the same reference system, so they can be combined. The datasets used in the project are:

  • Engineering data (rover and satellite):

    • Mars Science Laboratory and Mars Exploration Rovers (MSL/MER) NASA images (archived) taken by the NASA rovers on Mars.

    • MMTD images (archived and taken in "real-time"). They consist of camera images, thermal images and stereo images of the MMTD facility.

    • Orbits of satellites (timestamped positions) used to contextualize the rover position and the terrain and atmospheric data.

  • Scientific data:

    • Mars geology and geodesy:

      • MOLA: Mars Orbiter Laser Altimeter [45]. Consists of a digital terrain model (DTM) with low resolution but almost full planet coverage.

      • HRSC: High Resolution Stereo Camera [23], mounted on Mars Express. Consists of DTMs and orthoimages of mid-level resolution and limited coverage.

      • HiRISE, CTX and CRISM: High Resolution Imaging Science Experiment, Context Camera, and Compact Reconnaissance Imaging Spectrometer for Mars [33]. These three instruments are usually operated in parallel, obtaining nested data. They consist of DTMs and orthoimages with higher resolution but low coverage.

      • SHARAD: Shallow Radar [43]. Consists of subsurface radargram images.

    • Mars atmosphere:

      • BGM4 [37]: GEM-Mars global climate model output, a 1-year reference run. Provides 3D fields (temperature, pressure, wind, air density, dust extinction, etc.), 2D fields (surface temperature and pressure, water ice opacity, etc.) and animated vectors (winds) based on simulated data.

      • PFS (levels 1 and 2) [21]: Observations from the Planetary Fourier Spectrometer on board Mars Express. These data can be used to generate different kinds of 3D plots (temperature profiles) and 2D plots (surface temperatures and aerosol opacities) based on real observations.

      • Tohoku ground-based measurements [29]: Telescope observations. Consist of 2D plots (H2O, CO2, etc.) based on observations from Earth.

Regarding the second group of data, the real-time data exchanged by the users is used to describe user interaction. A protocol for exchanging real-time data was created, defining different types of messages for session, user and object management, geological and atmospheric visualization, rover commands and remote computations. A minimal sketch of how such message types could be modeled is shown below.
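
As an illustration only, the sketch below models a handful of such message types; the type names and fields are hypothetical and do not reproduce the actual CROSS DRIVE protocol.

```python
# Hypothetical sketch of the real-time message categories; illustrative only.
import json
import time
from dataclasses import dataclass, field, asdict
from enum import Enum


class MessageType(Enum):
    SESSION_JOIN = "session_join"        # session and user management
    SESSION_LEAVE = "session_leave"
    OBJECT_CREATE = "object_create"      # object management (annotations, landmarks)
    OBJECT_UPDATE = "object_update"
    VIS_STATE = "vis_state"              # geological/atmospheric visualization state
    ROVER_COMMAND = "rover_command"      # rover tele-commands
    COMPUTE_REQUEST = "compute_request"  # remote computations


@dataclass
class Message:
    type: MessageType
    sender: str                          # user identifier
    payload: dict = field(default_factory=dict)
    timestamp: float = field(default_factory=time.time)

    def encode(self) -> bytes:
        """Serialize the message for transmission over the network."""
        data = asdict(self)
        data["type"] = self.type.value   # enums are not JSON-serializable
        return json.dumps(data).encode("utf-8")


# Example: a user shares a new landmark with the session.
msg = Message(MessageType.OBJECT_CREATE, sender="scientist_1",
              payload={"kind": "landmark", "lat": -5.4, "lon": 137.8})
print(msg.encode())
```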

5.2 System architecture

The previous section presented the conceptual views of the system architecture, providing information about important aspects of the system. This section describes the various components and their inter-relations using a 3-layered system architecture. Figure 8 shows how the conceptual views are mapped to the architecture layers. The following sections provide a detailed view of each layer.

Fig. 8

Mapping of the conceptual views with the system architecture

5.2.1 Presentation layer

The presentation layer maps the User Interface view and provides two separate interfaces, one for the core users (left-hand side of Fig. 9) and one for the external users (right-hand side of Fig. 9), so that they can conduct their activities in a collaborative manner without compromising data sensitivity. Both sides of the figure show the same datasets and annotation objects displayed on the two user interfaces.

  • Core User Interface Module: The core users are the participants that use the Virtual Reality facilities. This module offers an immersive user experience via stereoscopic visualization and body tracking capabilities. Once immersed, the users have access to a 3D interaction device (a flystick in the current implementation) with a set of buttons to execute various tasks, such as selecting a dataset, drawing a rover path or creating a landmark through a floating 3D window. This floating 3D window metaphor allows the selection and combination of the different datasets in an easy way, since mapping all the actions to the flystick buttons would not be possible (see the left side of the screenshot shown for the core user interface in Fig. 9).

  • External User Interface Module: This module offers a 2D representation of the area of interest to the remote external user and allows them to explore the area using the limited set of tools described in the previous section, through a 2D interface based on screen, mouse and keyboard. This module is intended to run on low-end desktops or laptops, and therefore the amount of data shared with this module needs to be controlled to allow real-time interaction. However, the external users share the same area of interest with the core users in order to carry out collaborative discussions and data exploration.

Fig. 9

User interface for core users displaying TES atmospheric data on top of MOLA terrain data (left). User interface for external users displaying TES atmospheric data on top of MOLA 2D map (right)

5.2.2 Service layer

The service layer encapsulates the functionality captured through the Activities and Tools view, Meeting Process view, Workspace view, Team Members view and Communication view, as shown in Fig. 8. This layer provides the services to be consumed by the presentation layer, which can be grouped into three categories: visualization services, remote computational services and collaboration services.

  • Visualization services: These services provide the functionality to visualize the Mars data and allow the users to interact with the virtual environment and perform their exploration tasks. For the data visualization, this research deployed the terrain visualization framework of [51] and VERITAS [4]. Furthermore, the data exploration tools and GIS tools described under the Activities and Tools view in Section 5.1.6 were integrated into these visualization systems.

  • Remote computational services: This group of services refers to the required computation tools and to the rover real-time system that are necessary during the private or group sessions described under the Workspace view and the Activities and Tools view. An example is the MMTD rover path-planning service, which calculates the optimal path for the rover to travel to a point of interest by taking the topology of the terrain into consideration (a sketch of such a planner follows this list). Other simulation services considered in this project include the integration of the ASIMUT tool [49] for atmospheric simulation. These services are geographically located in the facilities of the partners responsible for the tools in order to facilitate their management (similar to the service-oriented approach of [36]).

  • Collaboration services: These services represent the functionalities presented under the Meeting Process view, Workspace view, Team Members view and Communication view. This group of services is responsible for managing the collaborative sessions, the workspaces, the network distribution, and the communication between users. It also contains the low-level, technology-centric aspects of the network architecture and the distribution approach used. This approach is discussed in Section 5.3.
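
The sketch below illustrates one plausible form for the terrain-aware path planner mentioned in the remote computational services item: grid-based A* with a slope penalty and a hazard cutoff. This is a common technique chosen for illustration, not necessarily the algorithm deployed on the MMTD service.

```python
# Illustrative terrain-aware path planner: A* over a height grid where
# steep cells are hazards and steeper moves cost more. A sketch only.
import heapq
import math


def plan_path(height, start, goal, max_slope=0.5):
    """height: 2D grid of elevations; start/goal: (row, col) cells."""
    rows, cols = len(height), len(height[0])

    def neighbors(cell):
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                slope = abs(height[nr][nc] - height[r][c])
                if slope <= max_slope:            # skip hazardous cells
                    yield (nr, nc), 1.0 + slope   # penalize steep moves

    open_set = [(0.0, start)]
    g = {start: 0.0}
    came_from = {}
    while open_set:
        _, current = heapq.heappop(open_set)
        if current == goal:                       # reconstruct the path
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]
        for nxt, cost in neighbors(current):
            tentative = g[current] + cost
            if tentative < g.get(nxt, math.inf):
                g[nxt] = tentative
                came_from[nxt] = current
                h = abs(goal[0] - nxt[0]) + abs(goal[1] - nxt[1])
                heapq.heappush(open_set, (tentative + h, nxt))
    return None  # no safe path found
```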

5.2.3 Data layer

The data layer provides the data access service for the service layer to store and retrieve different types of information corresponding to the Information View.

Regarding the scientific data, the terrain datasets are optimized for visualization using the HEALPix tessellation [19]. The atmospheric datasets are converted and stored in the VTK (Visualization Toolkit) format [1], using the MOLA coordinate system as the reference system. Bringing all these datasets into the same reference system opens the door to direct comparisons. For example, at some point in Use Cases 2 and 3, the Tohoku ground-based observations, PFS (satellite observations) and BGM4 (model) are compared while geographical information is still provided by MOLA and HRSC.
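
As a small illustration of the HEALPix-based tiling, the sketch below maps Mars surface coordinates to HEALPix pixel indices using the healpy package; the NSIDE value and the lat/lon handling are illustrative assumptions, not the project's actual tiling parameters.

```python
# Illustrative mapping of Mars surface coordinates to HEALPix pixels.
import numpy as np
import healpy as hp

NSIDE = 256  # tessellation resolution: 12 * NSIDE**2 pixels over the sphere


def latlon_to_pixel(lat_deg, lon_deg):
    """Return the HEALPix pixel index for a (lat, lon) in degrees."""
    theta = np.radians(90.0 - lat_deg)   # colatitude, 0 at the north pole
    phi = np.radians(lon_deg % 360.0)    # longitude in radians
    return hp.ang2pix(NSIDE, theta, phi)


# Example: which terrain tile covers the center of Gale Crater (~5.4 S, 137.8 E)?
print(latlon_to_pixel(-5.4, 137.8))
```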

Regarding the engineering data, the MMTD images consist of a library of images taken by the real rover in the MMTD facility, in a similar way to the MSL/MER library of images taken by the NASA rovers on Mars. The orbit data consists of timestamped positions of the natural and artificial satellites of Mars. It is therefore possible to travel back in time to the particular date when an observation or picture was taken and check the positions of the satellites and of the rovers on the surface on that date.
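
A minimal sketch of this "travel back in time" lookup follows: given timestamped orbit samples, the position at an arbitrary query time is linearly interpolated. The (time, (x, y, z)) data layout is an assumption for illustration.

```python
# Illustrative orbit lookup: interpolate a satellite position at time t
# from sorted, timestamped samples.
from bisect import bisect_left


def position_at(samples, t):
    """samples: sorted list of (time, (x, y, z)); returns position at t."""
    times = [s[0] for s in samples]
    i = bisect_left(times, t)
    if i == 0:
        return samples[0][1]             # clamp before the first sample
    if i == len(samples):
        return samples[-1][1]            # clamp after the last sample
    (t0, p0), (t1, p1) = samples[i - 1], samples[i]
    a = (t - t0) / (t1 - t0)             # interpolation factor in [0, 1]
    return tuple(p0[k] + a * (p1[k] - p0[k]) for k in range(3))
```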

5.2.4 Security

Security is an important aspect of the overall system, since some of the data is only accessible to the core users. Therefore, security mechanisms need to be applied to all the architecture layers, especially within the service layer, since that is where most of the services that access archived data are available and where the network connections are managed. The system architecture is depicted in Fig. 8 as "layers with sidecar", as described in [9], meaning that each layer can use the security features.

5.3 Architecture deployment

Figure 10 shows the physical realization of the architecture across several remote locations. Core users can have different types of VR installations, based on technologies such as CAVEs and PowerWalls, giving varying degrees of immersion. Some of the nodes can be dedicated to scientists at their science bases and some to engineers at their engineering support centers. The main node is the mission control center, where the MMTD, the central archive and the Mission Director are typically located. Each node is composed of the user interface (for core or external users), the visualization system responsible for the rendering of the scientific data, the local archive, and the collaboration manager, which is responsible for maintaining the connection, the session and the message exchange. The local archive maintains a copy of the scientific and engineering data necessary for conducting the mission planning tasks.

Fig. 10

Deployment of the system architecture at remote locations, depicting three remote centers (left), two external users (bottom-right) and the mission control center (top-right). Arrows show the communication through the network (arrows between telepresence server and clients, and between archives, are removed for clarity)

The overall system makes use of a hybrid network architecture in which all the user and session management messages are sent using a client-server architecture, while the user and object positions are sent using a peer-to-peer architecture to provide faster response in interaction tasks (see the routing sketch below). The messages exchanged are encrypted using an asymmetric public-key cryptographic system so that only the authorized partners can read them. The server in the overall client-server architecture is the CDServer, located at the mission control center, which provides an additional level of security, as the CDServer checks every message to make sure it is allowed at that point in the meeting. The CDProxy allows external users, who typically have an arbitrary IP address, to connect to the core system, providing an additional level of security for external connections, as the CDServer can only be reached from the IP addresses of the core members of the consortium.
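
The routing rule can be summarized in a few lines; the sketch below is a simplification in which the message types, connection objects and the omitted encryption step are all illustrative assumptions rather than the system's real interfaces.

```python
# Simplified sketch of the hybrid routing rule: frequent pose updates go
# peer-to-peer, everything else goes through the central server (CDServer),
# which validates messages against the meeting rules. Illustrative only.
import json

POSITION_TYPES = {"user_pose", "object_pose"}


def route(message, server_conn, peer_conns):
    """Dispatch a message over the appropriate network path."""
    data = json.dumps(message).encode("utf-8")
    # In the real system the payload is encrypted with the recipient's
    # public key so only authorized partners can read it (omitted here).
    if message["type"] in POSITION_TYPES:
        for peer in peer_conns:      # peer-to-peer path: faster interaction
            peer.send(data)
    else:
        server_conn.send(data)       # client-server path: session control
```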

In order to support telepresence, every core facility should have 3D user capture hardware to support 3D user reconstruction. A separate peer-to-peer arrangement is supported between the telepresence clients in order to offer faster response. However, in the current implementation, this is only available in one of the nodes (the OCTAVE at the University of Salford) [41].

Finally, remote computation servers can be accessed through the CDServer for compute-intensive simulation requests.

6 Evaluation

With regard to the design evaluation methods described in [24], the evaluation performed during the development of the artifact is observational. This evaluation was carried out mainly through the study of the artifact while it was being used by the end users during each of the three use cases created for its validation. These use cases were designed following an incremental approach. Since the purpose of this project was to develop a system that can be used in current and future European missions, the use cases were based on relevant and common scenarios in space science and engineering, designed with the help of the end users of the consortium.

The use cases were used for a functional validation of the development of the system. In these validations, the end users (as experts) tested the system to assess whether all the functionality and actions described in the use cases could be performed.

The evaluations tried to gather as many end users from the project partners as possible in order to get feedback that could help to improve the system. Four expert users took part in use case 1, joining from two science home bases, one located at DLR (Germany) and the other at the University of Salford (UK), one engineering home base located at TASI (Italy), and the mission control center located at Altec (Italy). The use cases included the use of a range of VR displays (from PowerWalls to the OCTAVE) and interaction technologies (mainly optical systems using passive markers for head and hand tracking, and joysticks). The remote facilities were linked using CROSS DRIVE's distributed architecture and had an audio connection so that the participants could discuss the mission and tasks. For use cases 2 and 3, other core and external users joined as atmospheric experts from BIRA (Belgium), INAF (Italy) and Tohoku University (Japan), making a total of 8 users connected simultaneously (which coincides with the minimum number of users stated in the system requirements in Section 4.2).

Figure 11 shows pictures of each use case validation in different rows: use case 1 (a), use case 2 (b) and use case 3 (c). For use case 1, the objective was to study the Gale Crater area from the geological point of view in order to find a safe landing site. The pictures show the detailed description of the terrain around Gale Crater carried out by a geologist in the VR facility of DLR by combining different datasets (MOLA and HRSC), while other scientists attended this description from Salford and TASI. The set of terrain and GIS tools described in Section 5.1.6 was used during this validation. The picture on the right shows some plotting and measuring capabilities, as the scientists obtain height profiles at different points of Gale Crater.

Fig. 11

Validation of the system: pictures and screenshots of (a) use case 1, (b) use case 2 and (c) use case 3 demonstration sessions

For use case 2, the focus was on the visualization, analysis and discussion related to state-of-the-art research on the Mars atmosphere. The objective was to explore the landing site location using global views of Mars to analyze concepts related to the atmospheric temperature fields, suspended dust and ice, global circulation, and dynamics. Data coming from models is compared to real observations from Earth and from satellites; this helps to see the structure of the atmosphere on any particular day, so that an entry, descent, and landing can later be studied. The middle-row screenshots (b) show different atmospheric datasets being displayed: from left to right, 3D temperature fields from PFS satellite observations, ice opacity from ground observations, and volume rendering of ozone from the BGM4 model.

Finally, use case 3 focused on the visualization and analysis of the engineering data related to the operational phase of a robotic mission. The scenario behind this use case was to plan rover operations at the previously selected landing site. Therefore, it included the tasks of use cases 1 and 2 and added simulated rover operation and the transmission of telecommands to the real rover in the MMTD facility (simulating the rover on Mars). The bottom row (c) of Fig. 11 shows the Mars Express spacecraft orbit over the terrain under study. After this, the activities described for use cases 1 and 2 were carried out before starting the rover path planning in the simulated terrain (middle picture). Finally, the third picture shows the view of a camera located in the MMTD while the real rover traversed the path defined in the simulated environment.

6.1 Results of the observational evaluation

We used different techniques to get feedback from the end users. Namely, we encouraged them to think aloud during the validations, observed how they coped with the system and interviewed them afterwards. The execution of the use cases demonstrated that the system performed properly, supporting the distributed interaction among users.

During the validation of the first use case, we noticed that it was difficult for some users to navigate to the region under study (some of them had little or no experience with VR devices and displays). To solve this problem, the possibility to travel to a set of predefined locations was included, as well as to the location of any GIS element created on the surface. This was particularly helpful, as one experienced user could create a landmark on the terrain, name it, and ask the rest of the team to click on its name to be teleported to that location. Moreover, it was hard for users to see these GIS elements (i.e. landmarks on the terrain) from a planetary view, as their size was fixed. This was solved by making them scale with distance, so that they kept a fixed apparent size regardless of their distance from the viewer, as sketched below.
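
A minimal sketch of this distance-dependent scaling, assuming a simple pinhole viewing model: each marker is scaled in world space so that it subtends a roughly constant angle regardless of its distance to the viewer.

```python
# Illustrative fix for vanishing GIS landmarks: scale each marker with its
# distance to the viewer so it keeps a constant apparent (on-screen) size.
import math


def marker_scale(marker_pos, camera_pos, angular_size_deg=1.0):
    """World-space size giving a roughly constant angular size."""
    distance = math.dist(marker_pos, camera_pos)
    return 2.0 * distance * math.tan(math.radians(angular_size_deg) / 2.0)


# A marker 10 km away is drawn ~100x larger than one 100 m away, so both
# subtend about one degree of the viewer's field of view.
print(marker_scale((0, 0, 10_000), (0, 0, 0)))
print(marker_scale((0, 0, 100), (0, 0, 0)))
```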

During the second use case, the scientists found that it was not easy to get used to the combination of buttons designed to perform most of the actions, as these had increased significantly since use case 1. This led to a redesign of the interaction, which ended up including a floating menu in front of the user (as can be seen on the left-hand side of Fig. 9).

Finally, the third use case provided feedback on functionality that would be interesting to include in future work. For example, some users suggested that it would be interesting if the pictures taken by the real rover on the MMTD were included in the virtual environment, to enrich the system with data coming from the real world (in a real mission, these data would come from the rover on Mars). This could also include the 3D generation and placement of the terrain in front of the rover using the stereoscopic camera mounted on it.

In line with this use-case-based evaluation, the system was showcased and the use cases re-executed during the final workshop of the project, which took place at Altec's facilities (Italy) in November 2016 (Fig. 12). This event gathered members of ESA and NASA as well as European Commission reviewers, who validated the system and provided useful feedback.

Fig. 12

Picture of the final workshop of the project showing a scientist describing atmospheric aspects of Mars

Apart from this observational validation with end users based on case studies, a formal experimental evaluation studying the usability of the system is foreseen as future work.

6.2 Comparison with other virtual meeting systems

Due to the particular characteristics of the CROSS DRIVE system, it is not easy to compare it with other available virtual meeting solutions. One of its main characteristics, the visualization of geographic and atmospheric data, is not available in any other virtual meeting environment.

Nonetheless, Table 2 provides a comparison of CROSS DRIVE with 8 other virtual meeting systems that are currently available. As the table shows, no other solution provides support for the visualization of large-scale data, 3D avatars reconstructed from video, full awareness of non-verbal behavior (NVB), or the connection to physical systems. However, other platforms provide functionality that is not available within CROSS DRIVE, such as support for mobile devices, video chat, the ability to load custom 3D models, a shared whiteboard, or the possibility to draw in 3D space. These were not considered essential characteristics during the analysis and design stages, but would certainly help users communicate in some circumstances.

Skype is a well-known and broadly used tool for holding online meetings. In fact, videoconferencing tools of this kind are, as mentioned in the introduction, currently used in space mission planning. However, even though Skype is able to convey a wide range of NVB, its drawbacks are apparent. The main reason is that the user is not immersed within the data, so it is hard to contextualize the NVB (e.g., eye gaze).

However, Skype is not the only option, as a new set of virtual meeting tools has arisen coinciding with the arrival of consumer virtual reality headsets on the market. These tools focus primarily on spending time with friends, which limits their application, but some of them are advertised as the new way to hold business meetings online. This is the case of MeetInVR, a solution that shares some functionality with CROSS DRIVE, for example the collaborative interaction support and the possibility to have private and public workspaces. Unfortunately, there are some limitations to its application to space mission planning meetings, such as the lack of large-scale (planetary) data visualization and its limited support for conveying NVB.

Table 2 Comparison of CROSS DRIVE with other virtual meetings systems

6.3 Discussion

The CROSS DRIVE project aimed at supporting the landing site selection for the ExoMars rover mission. As there have been few missions, a standard procedure for landing site classification is yet to emerge. Thus, characterizing landing sites is a very individual process, always highly adapted to the specific space mission goals. Luckily, very precise descriptions of NASA's approaches for various missions, like the 2020 Mars rover [20], InSight [18], Mars Science Laboratory [17], and the Mars Exploration program [2], have been published. Little is published about ESA's approaches (e.g. for Beagle 2 or Schiaparelli), but members of the CROSS DRIVE team participated in the landing site characterization for the ExoMars rover mission. They reported small local teams working in isolation in their own institutes on very specific scientific questions. Tele-conferences were organized to discuss progress and results on characterization issues and potentially good landing site candidates by sharing PowerPoint presentations. We talked to the planetary researchers involved about the potential of distributed interactive environments, like that offered by CROSS DRIVE, to improve collaborative landing site discussion sessions. A high demand was identified for interactive presentations of basic information (like elevation models) and derived surface characterizations, to foster a common understanding of findings and open issues. On the other hand, CROSS DRIVE was considered much too complex to be supported by simple tele-conferences. Unfortunately, the space scientists were already working on the site selection when CROSS DRIVE came into play, and so completed their decision making before really making use of it. However, this closeness in timing allowed the space scientists to imagine how CROSS DRIVE might have helped. They felt that meeting on virtual planets to plan future missions was very attractive.

An important prerequisite for uptake would be the reduction of the hardware resource requirements. Immersive virtual environments, like multi-wall installations, might be advantageous but are much too expensive for sporadic use. With the availability of cheap head-mounted displays, virtual reality based collaborative sessions become much more affordable. Augmented reality (AR) devices (like Microsoft's HoloLens) might also be integrated. In follow-up projects of CROSS DRIVE, teams are already working on the integration of AR devices and on tackling the real-time issues that accompany such wireless visualization systems. Eventually, this always leads to level-of-detail (LOD) techniques, which adapt the complexity of the scene with respect to eye distance but also to the performance of the hardware used. This was already considered in the development of CROSS DRIVE's 3D visualization methods in order to maintain a usable interactive session for the scientists. The rendering is decoupled from the data processing. According to the hardware performance, the scene complexity is increased iteratively up to the point where the frame rate drops below a threshold. This guaranteed 60 fps stereo projection in interactive, immersive environments, whereas good visual results at a minimum of 30 fps in mono were achieved on less powerful laptops. A user-adjustable parameter controlling the level-of-detail factor makes it possible to manage the trade-off between frame rate and visual quality, as sketched below.
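
A condensed sketch of this frame-rate-driven control, with illustrative names and thresholds: detail is raised while the frame budget allows it and lowered otherwise, and a user-adjustable factor trades visual quality for frame rate.

```python
# Illustrative frame-rate-driven LOD control; names and thresholds are
# assumptions, not the project's actual tuning.
TARGET_FPS = 60.0


def update_lod(current_level, last_frame_ms, max_level, lod_factor=1.0):
    """Raise detail while the frame budget allows it, lower it otherwise.
    lod_factor > 1 demands a shorter frame time, i.e. favors frame rate."""
    budget_ms = 1000.0 / (TARGET_FPS * lod_factor)
    if last_frame_ms > budget_ms and current_level > 0:
        return current_level - 1          # too slow: coarsen the scene
    if last_frame_ms < 0.8 * budget_ms and current_level < max_level:
        return current_level + 1          # headroom left: refine the scene
    return current_level
```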

Figure 13 shows a performance analysis as the view moves from orbit towards the ground; the terrain resolution therefore gradually increases, reducing the maximum achievable frame rate. Performance is shown in terms of rendering, LOD updates, and user input handling, where LOD updates include loading requested terrain tiles from disk and uploading them to, or deleting them from, the GPU. The software needs a warm-up phase of around three seconds, after which it operates at peak performance. With vertical synchronization (VSync) enabled, the software delivers a constant 60 frames per second, since it is synchronized with the monitor's refresh rate (60 Hz during these measurements). The red curve shows the frame rate for the same scene with VSync deactivated, demonstrating that the system is capable of higher performance than is normally considered sufficient for comfortable VR viewing, although the margin shrinks as rendering complexity grows. It is notable that Sony set a lower limit of 60 Hz for the certification of VR games. Although not shown in the diagram, LOD update operations were automatically postponed to the next frame whenever the frame budget of 16 ms (60 Hz) was exceeded.
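The frame-budget mechanism can be sketched as follows in Python. The operation names and costs are illustrative, and the scheduling is simplified to a single queue; this is a sketch of the deferral idea, not the actual scheduler:

```python
from collections import deque

FRAME_BUDGET_MS = 16.0  # 60 Hz frame budget

def run_frame(pending_updates, render_cost_ms):
    """Spend whatever is left of the frame budget on queued LOD operations
    (tile loads, GPU uploads/deletes) and defer the rest to the next frame."""
    spent = render_cost_ms
    while pending_updates and spent + pending_updates[0][1] <= FRAME_BUDGET_MS:
        op, cost_ms = pending_updates.popleft()
        spent += cost_ms  # "execute" the update within the remaining budget
        print(f"ran '{op}' ({cost_ms} ms)")
    if pending_updates:
        print(f"deferred {len(pending_updates)} update(s) to the next frame")

# Illustrative queue: operation names and costs are made up.
queue = deque([("load tile A", 4.0), ("upload tile A", 5.0),
               ("delete tile B", 3.0)])
run_frame(queue, render_cost_ms=9.0)  # only the first update fits this frame
run_frame(queue, render_cost_ms=6.0)  # the remainder runs in the next frame
```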

Fig. 13 Performance analysis during a flight from orbit towards the ground at a screen resolution of 1024x768. The red plot shows the performance achievable when rendering is not locked to the refresh rate of the VR display

Although many desktop applications permit more precise map-based GIS tools, immersive environments can provide additional advantages over desktop systems when 3D perception and direct interaction are beneficial. Thus, we integrated sub-surface radar data from SHARAD (SHAllow RADar, an instrument on the Mars Reconnaissance Orbiter) for evaluating correlations between sub-surface profiles and the surrounding terrain. Whereas desktop applications depict the radar image side by side with the terrain map, we placed the radar profile at its exact position, orthogonal to the terrain surface. Additionally, the half of the terrain between the user and the sub-surface profile is drawn semi-transparent, allowing a direct view of the radar profile and the terrain surface behind it. This approach directly depicts the correlation between detected radar features and their continuation on the terrain. However, correct perception is only possible with stereo projection.
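As an illustration of this placement, the short Python sketch below builds the vertex positions of such a radar "curtain" hanging below a given ground track. For simplicity it assumes a constant surface height for the top edge, whereas the system described above positions the profile exactly relative to the terrain; the function name and data are hypothetical:

```python
import numpy as np

def radar_curtain(ground_track, surface_height, depth):
    """ground_track: (N, 2) array of x, y points along the orbiter's footprint.
    Returns (2N, 3) vertices: a top edge at the terrain surface and a bottom
    edge `depth` metres below, ready to be textured with the radargram."""
    track = np.asarray(ground_track, dtype=float)
    top = np.column_stack([track, np.full(len(track), float(surface_height))])
    bottom = top.copy()
    bottom[:, 2] -= depth  # drop the lower edge to show the sub-surface
    return np.vstack([top, bottom])

# Example: a short straight track with 1 km of sub-surface shown.
track = np.array([[0.0, 0.0], [500.0, 20.0], [1000.0, 35.0]])
print(radar_curtain(track, surface_height=-2100.0, depth=1000.0))
```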

Another tool we implemented for virtual reality based environments is the dip-and-strike tool. It lets users mark points on sedimentary rocks to specify connected stratigraphic levels; a plane is then automatically fitted through all marked points. Only in stereoscopic environments can orientation and inclination be directly perceived and assessed. Additionally, a comparison with results from a GIS tool (ArcMap by ESRI Inc.) demonstrated the robustness of the implementation.
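A plausible reconstruction of this computation, assuming a least-squares plane fit, is sketched below; CROSS DRIVE's exact method and angle conventions may differ:

```python
import numpy as np

def dip_and_strike(points):
    """points: (N, 3) array of x (east), y (north), z (up) picks marked on one
    stratigraphic level. Returns (strike_deg, dip_deg)."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # The plane normal is the right singular vector with the smallest
    # singular value (least-squares plane fit).
    _, _, vt = np.linalg.svd(centered)
    n = vt[-1]
    if n[2] < 0:
        n = -n  # orient the unit normal upward
    dip = np.degrees(np.arccos(np.clip(n[2], -1.0, 1.0)))
    # The horizontal part of the upward normal points in the dip direction;
    # strike follows the right-hand-rule convention (dip 90 deg clockwise).
    dip_azimuth = np.degrees(np.arctan2(n[0], n[1])) % 360.0
    strike = (dip_azimuth - 90.0) % 360.0
    return strike, dip

# Four picks on a plane dipping about 5.7 degrees to the east.
pts = [(0, 0, 0), (100, 0, -10), (0, 100, 0), (100, 100, -10)]
print(dip_and_strike(pts))  # -> (0.0, ~5.71)
```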

The planetary scientists confirmed significant advantages over the desktop tools they had used so far. Besides the approaches depicted above, they also found the CROSS DRIVE tools for placing landing ellipses and landmarks, drawing rover paths, and constructing topographic cross sections (for slope analysis) highly helpful for geological landing site characterization. They also confirmed the quality of the measurements by comparing them against results obtained from the independent measurement software tools they normally use.

There are several ways in which our system could be improved. One concerns communication in collaborative systems. A large amount of information is exchanged between users during CROSS DRIVE collaborative sessions, mostly spoken, which makes it difficult to document or log what happens in them. If these conversations were automatically converted into text, the use of AI, including natural language processing tools, would allow the creation of reports for each session, extracting information on the progress made, the decisions taken, the strategy followed, and so on. This would be useful, for example, to document the session for future reference or dissemination purposes, or even to identify recurring problems that may require improvements to the system. The current user input interface is based on the selection of 3D menu items through a pointer. Alternative natural language interfaces could be developed that some users might find more intuitive. These might also make it easier for people to interpret what team mates are doing when controlling the system, although spoken commands could at the same time interfere with conversation.
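As a toy illustration of the reporting idea, the sketch below scans an already transcribed session log for decision-like phrases. A real pipeline would combine speech-to-text with proper natural language processing models; the cue list and log lines here are purely hypothetical:

```python
import re

# Hypothetical decision cues; a real pipeline would use speech-to-text
# followed by proper NLP models rather than keyword matching.
DECISION_CUES = re.compile(r"\b(we decided|agreed|let's go with|selected)\b",
                           re.IGNORECASE)

def extract_decisions(transcript_lines):
    """Return the transcript lines that look like decisions."""
    return [line for line in transcript_lines if DECISION_CUES.search(line)]

log = [
    "Anna: the eastern ellipse has slopes above 15 degrees.",
    "Ben: agreed, we should discard it.",
    "Anna: we decided to keep the western candidate as the prime site.",
]
for decision in extract_decisions(log):
    print(decision)
```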

7 Conclusions and future work

The main contribution of this paper is the detailed design of a software architecture that can support multi-functional team collaboration for the space industry (science and engineering). Fragmentation of datasets and expertise leaves little scope for collaborative activities in current space exploration and mission planning tasks. This paper details the investigation, design and development of a collaborative environment for multi-functional dispersed teams to address this problem, within the context of the design science in information systems research methodology. The research question concerns the nature of a system architecture that supports team collaboration for space science.

This paper outlines the architectural design of a platform to support computer-mediated meetings in which scientists and engineers can be immersed into the data, interact in a natural way with the environment, and engage in simulation-focused verbal and non-verbal communication with team members. The conceptual architecture is defined using a generic three-layered architectural pattern enriched with the description of six system views. These views formed the basis for defining the system requirements and for designing and implementing the final system architecture. The system requirements were elicited from usage scenarios described in conjunction with the end-users.

The system was validated through three different use cases representing a wide range of common usage scenarios for European space science (mainly ExoMars). Unfortunately, the need for expert users prevented a sample size sufficient for meaningful quantitative evaluation.

It is expected that the successful outcome of CROSS DRIVE will have a significant impact on how future missions, such as ExoMars, are designed and validated; on the way space scientists conduct space science research; on the mobilization of the best expertise in various fields of science for the analysis and interpretation of space data; and on how distributed scientists and researchers work together to engage in data analysis and interpretation.

Future work could include the use of AI, including natural language processing, both to gain information about how decisions were made and to make the interface more intuitive for some users. Integration of head-mounted displays would provide a more affordable solution, although hiding the face poses a challenge for both local and video-based telepresence collaboration. Augmented reality technologies could also be integrated, but current approaches have a narrow field of view that is not well suited to the visualization of large terrain datasets and complex atmospheric data. Finally, a quantitative evaluation of the system could recruit from a larger non-expert user group to answer generic usability questions.