1 Introduction

Scientific inquiry as a teaching concept is ideally suited to encourage students to engage with topics from science, technology, engineering, and mathematics (STEM): it is both inductive, i.e., students learn by discovering rules through observation, and problem-based, and it therefore gives students a real-world application for their prior knowledge from class and the knowledge newly acquired in the lesson (Constantinou et al. 2018). Such active learning has been shown to increase student performance in STEM fields in the meta-analysis by Freeman et al. (2014) and is further enhanced by collaborative learning, in which students work in small groups, discussing their tasks and helping each other (Akçayır and Akçayır 2018).

In the twentieth and twenty-first centuries, advancements in digital media have opened up new possibilities for teaching in schools and can provide an enriched learning experience by adding resources, e.g., additional images or even videos, through a digital layer via augmented reality (AR) (Stintzing et al. 2020). Using AR and virtual reality (VR) to visualize the 3D features of a landscape can provide different and more realistic ways to analyze the chosen landscape (Prisille and Ellerbrake 2020).

Meta-studies by Akçayır and Akçayır (2017) and Merchant et al. (2014) have shown that 3D images, games, and VR can help students to better understand and visualize the respective topic and content (Stojšic et al. 2016). Extra-terrestrial worlds and space exploration in general have always fascinated people of all ages and are ideally suited to spark students’ interest in STEM topics. For this reason, two previously created worksheets comparing mountains on Earth with their counterparts in the inner solar system were selected to be enhanced by digital 3D models. The first uses Mount Everest on Earth versus Mons Huygens on Luna to teach about scales, a common topic in maths; the other uses Maunakea on Earth versus Olympus Mons on Mars to teach about shield volcanoes, an important topic in the geography curricula of Germany (e.g., Senatsverwaltung für Bildung, Jugend und Familie 2017; Ministerium für Schule und Bildung des Landes Nordrhein-Westfalen 2019a, b; Staatsinstitut für Schulqualität und Bildungsforschung München 2021). The AR app is intended to help students visualize these mountains, their distinctive features, and their scales. A separate VR model of Mars has been created, which can be used to put Olympus Mons into context. The VR app teaches the use of elevation and roughness models to select suitable Mars rover landing sites depending on surface conditions.

The three aspects of the term complexity in this paper refer to (1) the application as such, varying with the degree of virtual content integration, (2) the underlying geospatial information, adapted to the needs of AR and VR, respectively, and (3) the virtual environment created from geospatial data (Fairbairn 2006; Castner and Eastman 1985; Robinson 1952). Since most geospatial datasets are too large or too detailed for end-user equipment, downscaling is necessary, while still preserving most details with the highest possible accuracy regarding location, resolution, and the relation between features. The trade-off between level of detail (LOD) and memory storage leads to different approaches to integrating elevation models in AR and VR. Not all components of the virtual environment are equally important, and thus not all of them need the highest possible texture resolution or detail level. A good balance between time requirement and performance is desirable (Dickmann and Dunker 2014). In this study, the workflows we adopted for combining elevation data and textures are shown for AR and VR, respectively, to highlight the differences in creating a stable, reusable platform that integrates real-world remote sensing data and interactive content for Unity (AR) and Unreal Engine (VR). We strove to explore how different levels of virtual complexity can help to overcome accessibility issues and make virtual environments socially inclusive. As existing models of Earth, Moon, and Mars such as Mars Trek were not customizable and thus not suitable for use with our teaching material, we developed virtual landscapes as a platform for current and future school applications. Different LODs and data conversion according to the level of interactivity and end-user device allow for tailored applications for teaching material.

The AR and VR apps are part of the KEPLER ISS project, which uses astronauts’ views, including Earth observation sensors aboard the International Space Station (ISS), as well as remote sensing data of Earth and other planets to engage students with STEM subjects using interdisciplinary, interactive, and intermedia teaching materials (Ortwein et al. 2017) as well as digital experiments (Lindner et al. 2019; Rienow et al. 2020). The AR app of this study is integrated into the main app of the project, “Columbus Eye” (Lindner et al. 2020, https://play.google.com/store/apps/details?id=com.ColumbusEye.Main, part “Mountains in the Solar System”), named after the predecessor of the KEPLER ISS project. The VR app is yet to be published.

1.1 AR and VR in Education

AR and VR technology is often associated with entertainment, i.e., games such as Pokémon Go, Beat Saber, or Superhot VR. However, a study conducted by Greenlight VR in 2016 reveals that VR is more than an extension of gaming and that US customers value tourism/adventure (73.5%) and education (63.9%) more highly than gaming (61%) (Rex 2016).

Nevertheless, for some students, using AR on their own smartphones comes close to gaming. AR has been used successfully in all age groups from kindergarten to university (Akçayır and Akçayır 2017) and is currently used in professional settings for teaching and learning (Kljun et al. 2020).

1.1.1 Benefits of AR and VR for School Education

Several meta-studies have been performed in recent years, overall showing long lists of benefits for students in using AR and VR (cf. Akçayır and Akçayır 2017; Wolf et al. 2020; Jensen and Konradsen 2018; Ott and Freina 2015; Cöltekin et al. 2019; Radianti et al. 2020; Hamilton et al. 2021). Studies show enhanced learning performance when mere passive intake of information is replaced by active experience and learning by doing (Clark and Mayer 2008; Reeves 2012). Educational applications should combine content and technical capabilities so that the resulting active learning environment facilitates grasping virtual experiments and a sense of place. Ideally, an immersive, easy-to-use, and intuitive application is not separated from its context, both in terms of location and in terms of leading questions and storytelling. Exploration of virtual content is a powerful means to gain insights into complex problems and concepts of causal relation. Thus, AR and VR would be ideal for distance learning if the equipment were available to all students at home as well as in class (Reiners et al. 2014; Turner and Turner 2006; Knierim et al. 2020; Edler et al. 2019, 2020; Babich 2019). Furthermore, accessibility is crucial when it comes to providing all students equally with the benefits of new technology. Depending on their socio-economic background, not all students can afford their own device. Therefore, the concept of “bring your own device” (BYOD) only works if the students’ equipment is adequate and evenly distributed in class. Nevertheless, BYOD has benefits regarding the ability to learn unconstrained by location, the other students’ pace, and time (Akçayır and Akçayır 2017).

While it can be assumed that a reasonable number of suitable smartphones are brought to class, head-mounted displays are not available to every student, explaining the variety of different applications covering the full range of the reality–virtuality continuum (Milgram and Kishino 1994; Ortwein et al. 2017; Reiners et al. 2014). Even students who do not have a recent smartphone or VR headsets can participate by cooperating with their peers who have the necessary devices, furthering collaborative learning and contributing to a social learning experience (Mozelius et al. 2020).

1.1.2 Extra-terrestrial Field Trips

Reading and learning about a specific topic is often accompanied by the wish to “go there” and experience it first hand; who did not want to be an astronaut as a child? With VR, students are able to immerse themselves in a subject by “being” there, e.g., visiting the International Space Station (ISS) or going on a virtual field trip through the jungle.

Several applications already exist to experience a virtual field trip or even go to space. While educational resources have successfully integrated real-world elevation models into a game engine workflow (Rienow et al. 2020; Keil et al. 2021), real geodata is less common in games than sculpted procedural, artificial landscapes, with exceptions such as strategy games like Cities: Skylines that are capable of integrating topographic data (Olszewski et al. 2020). Google Expeditions provided a variety of field trips that can be accessed by smartphone. These trips use VR panorama shots to let the user experience their virtual surroundings. The project has been very successful, with more than 1 million students from 11 countries (Babich 2019). Another application, Mission:ISS from Magnopus (Magnopus 2017), is highly interactive because it incorporates the Oculus Rift touch controllers. Here, users learn how to navigate in microgravity aboard the ISS and perform different activities such as taking a virtual EVA (extravehicular activity, “spacewalk”) or docking a spaceship. These activities are supplemented by informational video clips starring NASA astronauts.

Mars and Luna are popular main themes; e.g., NASA and Google collaborated to create “Access Mars”, where the user can follow the tracks of NASA’s Curiosity rover and get background information and further explanations by NASA scientists (Google 2020; NASA 2020). The application runs on the user’s smartphone without additional (expensive) hardware, thus making it more accessible than Mission:ISS. Other AR applications, such as Missions to Mars or Apollo’s Moon Shot, provide gamified takes on the Mars and lunar missions, e.g., allowing the public to drive NASA’s Mars rovers on Earth (Reichhardt 2021) or, in the case of VR Mission:Mars (SpaceRobotics.EU 2020), to explore the rovers in their “natural habitat” on Mars. While there are all-round commercial service providers like ClassVR, CoSpacesEdu, and zSpace offering a multitude of services to add AR and VR content and hardware to the classroom, these services usually require licensing and are cost-intensive (Avantis Systems 2020; CoSpaces 2021; zSpace 2021).

All in all, the development of AR and VR applications for serious games is industry-driven for the purpose of entertainment rather than research-driven for education or data visualization. These industry applications are easy to explore, whereas complex space exploration software such as the virtual universe of DLR Cosmoscout, although scientifically correct, excludes less computer-literate users (Schneegans and Flatken 2019). Therefore, it was our goal to develop reliable and customizable platforms for AR and VR applications in order to easily integrate interactive, school-related applications that are free of charge and easy to use on multiple devices, thus enabling students of all backgrounds to experience virtual extra-terrestrial field trips to Luna and Mars (Table 1).

Table 1 Overview of the utilized data sets from Luna and Mars

2 Extra-terrestrial Data and Model Extent

2.1 Mars

The Mars Viking mosaic was the highest-resolution image data available over the entire modeled area. As Olympus Mons, for example, is about 600 km in diameter, none of the sensors orbiting the planet has acquired an image of the entire mountain with a comparable resolution and better lighting conditions; only small individual stripes with different lighting conditions are available at a higher resolution, or the entire mountain at a much lower resolution and/or low sun angle. The colorized global mosaic used in this study, acquired by the orbiters of NASA’s Viking mission to Mars, has a resolution of 256 pixels/°, corresponding to around 232 m per pixel at the equator. The mosaic has been orthorectified based on Mars Orbiter Laser Altimeter (MOLA) elevation data and is displayed in a generalized simple cylindrical (i.e., equirectangular) projection (Williams 2018; USGS 2009).
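As a quick plausibility check of the stated pixel size, the ground sample distance follows directly from the angular resolution; a minimal sketch, assuming a Martian equatorial radius of ~3396.2 km (a constant not taken from the mosaic’s metadata):

```python
import math

# Ground sample distance of a 256 pixel/degree mosaic at the Martian equator.
MARS_EQUATORIAL_RADIUS_KM = 3396.2   # assumed constant
PIXELS_PER_DEGREE = 256

km_per_degree = 2 * math.pi * MARS_EQUATORIAL_RADIUS_KM / 360   # ~59.3 km per degree
m_per_pixel = km_per_degree * 1000 / PIXELS_PER_DEGREE          # ~231.6 m per pixel

print(f"{m_per_pixel:.1f} m per pixel at the equator")          # ≈ 232 m
```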

The section of Olympus Mons chosen for the AR model is 679 by 645 km and partially includes the foot of the mountain (Fig. 1). At the 232 m resolution of the Viking mosaic, this results in an image of 2927 by 2782 px. The lowest point is below the “sea level”, i.e., the reference areoid (Lemoine et al. 2001). While water may once have covered the northern and western flank of Olympus Mons, the visible ancient coastline is several hundred kilometers away (Perron et al. 2007) and, therefore, cannot be included in the AR model at a reasonable size.

Fig. 1
figure 1

a Overview of Mars topography and chosen sites for 3D modeling. The VR app focuses on the landing sites of several Mars probes, while the AR focuses on the highest mountain in the solar system, Olympus Mons. b The elevation of the Olympus Mons area is stretched to uint16 values (see Ch. 3.1) for the 3D model in the AR app and receives its texture from the c Mars Viking Global Mosaic of the same area. The mountain area itself is so large that images from different times of day and angles were mosaicked, with overlap lines clearly visible in the image.

The Mars Viking Mosaic is supplemented with a DEM based on Mars MGS MOLA—MEX HRSC Blended DEM Global 200 m v2, a blended image product that combines data from NASA’s MOLA and ESA’s High-Resolution Stereo Camera (HRSC). The original elevation uncertainty is ~  ± 3 m (MOLA) and 10 m (HRSC) with global topographic coverage and a spatial resolution of 463 m (MOLA) and 50 m (HRSC), respectively (Fergason et al. 2018). The blended image has a cell size of 200 m/px after applying bilinear interpolation to up-sample the MOLA and down-sample the HRSC DEMs, with a total elevation uncertainty of ± 3 m resulting from global (± 1.8 m) and regional shape areoid errors (Fergason et al. 2018, 2017; Laura and Fergason 2016).

The elevation in the modeled Olympus Mons area ranges from − 2715 m, in an area resembling an ancient stream bed northwest of the mountain, to the peak at 21,229 m, which is technically the rim of an impact crater near the volcanic craters at the top. The VR environment focuses on the immediate surroundings of the recent Mars rovers’ landing sites, as well as a full-extent landscape model (Fig. 1). While the full resolution of MOLA can be used for the landing site models, the landscape model resolution is scaled to 2625 m/px.

2.2 Luna

Just as Maunakea is only considered Earth’s highest mountain when measured from its foot below sea level to its peak, Luna’s highest mountain is only the highest mountain rising above a plain, not the highest point on the surface. It stands about 5500 m above Mare Imbrium, whose surrounding ring mountains Mons Huygens is a part of, but only 3281 m above the lunar “sea level”, the surface of the reference selenoid (Lemoine et al. 2013). The model area thus includes the entire mountain and a small part of the Mare Imbrium in Huxley Crater, covering an area of about 27,380 by 27,380 m, or 1369 by 1369 px at 20 m resolution. It spans from − 2004 m in a small impact crater within Huxley Crater to 3281 m at the mountain’s peak. Although data with a higher resolution are available from the Chinese Chang’E missions, the file size would have been too large for a mobile application. The Chang’E-2 global digital elevation model (DEM) and global digital orthophoto model (DOM) with 20 m resolution used for the applications described in this paper were produced from stereo pairs acquired by charge-coupled device (CCD) stereo cameras. The 20 m DEM and DOM were divided into 188 map subdivisions, which have different sizes and projections based on latitude and longitude (NAOC 2021; Zou et al. 2014). Tile F010 uses a normal Mercator projection at 18° width and 14° height with the central meridian at the mean value of the longitude range (Zou et al. 2014). In the grid system, F010 is the sixth scene counted from the North Pole and the 10th scene counted from − 180°E (Fig. 2).

Fig. 2
figure 2

a Moon DOM and DEM data with a resolution of 20 m are available for the entire Moon from Chang’E-2 data. b The tile containing Mons Huygens, F010, was subset c to an area of 27 by 27 km

3 Using Extra-terrestrial DEMs in Game Engines

To import extra-terrestrial DEMs into Blender and Unreal Engine, several restrictions and conventions of typical game environments have to be taken into consideration. While tools like BlenderGIS exist to import 3D models from Google Earth for Earth data, Mars and lunar data (as well as Earth data with bathymetry) have to be processed specifically. Unreal Engine has been our engine of choice for VR environments due to its integrated Quixel Megascans library (Epic Games 2019; Azim 2018), while our AR applications are developed in Unity with the Vuforia extension, which brings AR to all recent mobile devices with a camera.

For Unreal Engine, terrain information can either be imported as a Static Mesh or as a Landscape. A Static Mesh is a piece of geometry that consists of a set of polygons. In contrast to Landscapes, Static Meshes are cached in the video card memory, and thus can be translated, rotated, and scaled, but their vertices cannot be animated. Static Meshes are used to create world geometry for levels to be used in Unreal Engine. These 3D models can be loaded or created in external modelling applications like Blender and then be imported into Unreal Engine. Exemplary uses of Static Meshes are boxes, walls, doors, foliage, and other terrain decorations.

To create large terrains, Landscapes are a better choice than Static Meshes in Unreal Engine. Landscapes use 4 bytes per vertex for the vertex data; Static Meshes store position as a 12-byte vector, and tangent X and Z vectors packed into 4 bytes each, and either 16-bit or 32-bit float UVs (i.e., texture coordinates) for a total of either 24 or 28 bytes per vertex. This means that Static Meshes will use 6 or 7 times the memory of Landscapes for the same vertex density (Epic Games 2021). Landscapes also store their data as textures, and can unload unused LODs for distant areas and reload them from disk as the viewer approaches them. Landscapes use a regular heightfield, so the collision data can also be stored more efficiently than the collision data for Static Meshes (Epic Games 2021). Landscapes allow the use of heightmaps up to a resolution of 8192 by 8192 px and this is how the vast (slightly modified) MOLA DEM was imported into Unreal Engine (Fig. 3).

Fig. 3
figure 3

Workflow for the conversion of DEM and surface reflectance data into 3D models for AR and VR. Blue: pre-processing, green: texturing and material definition, yellow: lighting and environment settings

Since Unity does not accept individual models with more than 2¹⁶ (65,536) vertices (Unity 2019), equating to fewer than 22,000 faces (triangles) when each triangle uses its own three vertices, the Olympus Mons model’s vertex count had to be reduced below that number. Models with multiple parts can be used, but would increase the file size too much; thus, only single-part models are used in this AR application and the map extent is kept close to the respective mountain areas.

3.1 Image Pre-processing

With the aforementioned restrictions in mind, the first step is to stretch the DEM’s elevation range to uint16 (see Fig. 1). Uint16 stands for unsigned (no negative values) integer (natural or whole number) values between 0 and 2¹⁶ − 1 (i.e., 0 to 65,535), losing the unit of measurement. The image then needs to be saved as .tif in, e.g., Adobe Photoshop, removing the GeoTIFF tags. The elevation data is still stored in greyscale, although relative to uint16 values. These images are imported into Blender through a modifier (Figs. 1b, 3). In Blender, a simple plane with 4 corner vertices is subdivided using the “Subdivision Surface” modifier so that it can be bent into shape. Each 3D object consists of a multitude of 2D triangles, also called faces, each of which is defined by the positions of its 3 corners, the vertices. The more vertices and, therefore, triangles a 3D model contains, the more detailed its surface structure appears, but the larger the file becomes. The subdivision level of the “simple” option has to be carefully balanced: all models contain small hills and rough terrain, so subdividing the model is necessary to make it look sculptural, but this also leads to a high number of vertices. The “Displace” modifier uses the greyscale .tif images from the DEMs (Fig. 1b) to displace the newly created vertices to their new positions on the Z axis. Since negative values were eliminated earlier in the stretch to uint16, the “midlevel” now has to be defined as a fraction of 1 of the total elevation range, and the model’s Z scale has to be adjusted manually to fit the elevation data. As the model contains several hundred thousand triangles at this point, the “Decimate” modifier can be used to collapse them. The ratio for the Olympus Mons model is set to 0.05 to achieve the number of vertices supported by Unity, while still looking sculptural in areas with small-scale terrain. Mons Huygens’ elevation model’s 20 m resolution already fits the targeted model size. Textures and Materials are not applied in this step, as they are considered separate entities in Unity and Unreal Engine (Fig. 3), but their application will provide an enhanced impression of three-dimensionality. After applying the modifiers, the models are exported as Static Meshes in Filmbox (.fbx), the game engines’ preferred model format (Unity 2019; Epic Games 2021). Olympus Mons’ texture had to be resized to 1200 by 1141 px for AR due to the file size, reducing its resolution from 232 to 566 m.
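The Blender part of this chain can also be scripted via the bpy API. The following is a minimal sketch of the described plane–subdivide–displace–decimate–export workflow; the file paths, the subdivision level, and the scaling constants are illustrative assumptions rather than the exact project settings:

```python
# Minimal Blender (bpy) sketch of the plane -> subdivide -> displace -> decimate
# -> FBX chain described above. Paths, subdivision level and scale constants are
# illustrative assumptions, not the exact project settings.
import bpy

ELEV_MIN, ELEV_MAX = -2715.0, 21229.0      # elevation range of the model [m]
FOOTPRINT_M = 679_000.0                    # width of the modeled area [m]
PLANE_SIZE = 2.0                           # Blender plane spans -1..+1 units

bpy.ops.mesh.primitive_plane_add(size=PLANE_SIZE)
obj = bpy.context.active_object

# 1) Simple subdivision: enough vertices to carry the relief (the level has to
#    be balanced against the resulting vertex count).
sub = obj.modifiers.new("Subdivision", type='SUBSURF')
sub.subdivision_type = 'SIMPLE'
sub.levels = sub.render_levels = 9         # 4^9 quads, i.e. several hundred thousand faces

# 2) Displacement from the uint16-stretched greyscale DEM.
img = bpy.data.images.load("//olympus_mons_dem_uint16.tif")   # hypothetical path
tex = bpy.data.textures.new("DEM", type='IMAGE')
tex.image = img
disp = obj.modifiers.new("Displace", type='DISPLACE')
disp.texture = tex
disp.direction = 'Z'
disp.mid_level = -ELEV_MIN / (ELEV_MAX - ELEV_MIN)                # areoid (0 m) at plane level
disp.strength = PLANE_SIZE * (ELEV_MAX - ELEV_MIN) / FOOTPRINT_M  # true vertical/horizontal ratio

# 3) Collapse-decimate so the exported mesh stays below Unity's vertex limit.
dec = obj.modifiers.new("Decimate", type='DECIMATE')
dec.decimate_type = 'COLLAPSE'
dec.ratio = 0.05                           # ratio used for the Olympus Mons model

# 4) Export as Filmbox; modifiers are applied on export by default.
obj.select_set(True)
bpy.ops.export_scene.fbx(filepath="//olympus_mons.fbx", use_selection=True)
```

Here, the “midlevel” is chosen so that the areoid (elevation 0 m) sits at the plane’s base height, and the displacement strength reproduces the true ratio of elevation range to footprint width.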

To visualize high-definition close-ups of the terrain in VR that can be explored in great detail, we opted to import these scenes as Static Meshes. As stated before, these can provide much higher detail than Landscapes, so the viewer can experience the overall surrounding topography of Mars through the Landscape and explore the areas of interest via highly detailed Static Meshes that are overlaid on top of the surrounding Landscape. Since there is no technical limitation on the vertex count and, therefore, the amount of detail of a Static Mesh, it is important to balance the level of detail to achieve the desired performance on the target system. Several LOD settings have to be generated for each Static Mesh, so that the model looks detailed in close-ups and decreases in quality the further the viewer moves away from it, to increase performance. Furthermore, it is advised to use square textures with power-of-two dimensions (64 by 64 px, 128 by 128 px, 256 by 256 px, …) to waste less memory. A good amount of quality also comes from the way the Material for the Static Mesh is set up in Unreal Engine; this will be detailed in the following chapter.

While it is not necessary to resize the cropped images for use as Static Meshes, a VR Landscape is restricted in size. Since the source data (53,347 by 106,694 px at 200 m resolution, simple cylindrical map projection) exceeds the maximum supported Landscape size of 8129 by 8129 px in Unreal Engine, the data had to be resized for use as a landscape model. The pixel resolution was lowered to 500 m and the total size was reduced accordingly. As detailed in the Unreal Engine documentation, a Z scale value of 1 results in a maximum height of roughly 256 cm and a maximum depth of − 256 cm stored with 16-bit precision. To obtain the Z scale matching the data’s actual elevation range, the total range (in cm) is therefore divided by 512.
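A minimal sketch of this conversion, assuming the heightmap has been stretched over the full 16-bit range (the example elevation range is hypothetical):

```python
def unreal_landscape_z_scale(elev_min_m: float, elev_max_m: float) -> float:
    """Unreal Engine Landscape Z scale for a 16-bit heightmap stretched over
    the full uint16 range (a Z scale of 1 corresponds to -256..+256 cm)."""
    total_range_cm = (elev_max_m - elev_min_m) * 100.0
    return total_range_cm / 512.0

# Example with an assumed -8200 m to 21,229 m range for the Mars landscape:
print(unreal_landscape_z_scale(-8200.0, 21229.0))   # ≈ 5748
```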

To further enhance the visual fidelity of the data for use in Unreal Engine, a Selective Gaussian Blur filter was applied to eliminate terrain terracing (Fig. 4). The filter is applied only where the difference between the value of the target pixel and the values of the surrounding pixels is less than a pre-defined delta value. This means that most slopes are preserved, because the difference in pixel values between sloped and flat terrain is high, while small bumps in the terrain are still removed to a certain degree.
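The principle can be sketched as follows; this is a simplified re-implementation with scipy, not the exact filter of the image editing software used, and sigma, window size, and delta are illustrative:

```python
import numpy as np
from scipy.ndimage import generic_filter

def selective_gaussian_blur(dem, sigma=2.0, radius=4, delta=50.0):
    """Edge-preserving blur: each pixel is replaced by the Gaussian-weighted
    average of only those neighbours whose value differs from the centre pixel
    by less than `delta` (in DEM units). Slopes are kept, terracing is smoothed."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    weights = np.exp(-(xx**2 + yy**2) / (2 * sigma**2)).ravel()

    def _filter(window):
        centre = window[window.size // 2]
        mask = np.abs(window - centre) < delta   # centre is always included
        w = weights * mask
        return np.sum(w * window) / np.sum(w)

    return generic_filter(dem.astype(np.float64), _filter,
                          size=2 * radius + 1, mode="nearest")
```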

Fig. 4
figure 4

Left 8-bit Terrain Terracing, right 16-bit smoothed terrain (selective Gaussian blur filter)

The pre-processed images can now be used as close-to-reality elevation representations in both game engines. Materials are applied similarly in Unreal Engine and Unity to control the visual look of objects and their surroundings. Materials are assets that can be applied to a Landscape, Static Mesh or 3D model, respectively, and control the type of color, the type of surface, the translucency and many more properties of the object (actor) it is applied to (Epic Games 2021).

3.2 Textures and Materials

After importing the models into Unity, separate materials are created for each mountain using standard shaders (Unity 2019) and the surface reflectance images as Albedo (Figs. 1b, c, 3), which have exactly the same extent as the models. No normal maps are used, to save memory. The models with their materials are placed over Image Targets (Rienow et al. 2020) the size of A4 sheets. The square extent of Mons Huygens fits well over the A4 sheet, being a bit shorter than the sheet’s long side and a bit wider than its short side. Olympus Mons’ model is larger than two A4 sheets and completely covers the image target (Fig. 9).

To achieve a more realistic Mars-like look in Unreal Engine, images from the Perseverance Mastcam-Z camera (NASA 2021a) were merged with canyon surface Materials from the Quixel Megascans library (Azim 2018), matching the corresponding Earth surfaces to Mars’ landscape. Since Unreal Engine’s Materials support several inputs to further increase quality, metalness, roughness, and normal maps were acquired in addition to the albedo texture. In contrast, no normal maps were used in Unity to save memory on mobile devices, where memory can be a limiting factor. To apply the downloaded assets and paint the Landscape, a specifically designed Landscape Material was created to suit the Mars surface. This Material applies different textures to the Landscape depending on the underlying terrain geometry: more level surfaces feature a “normal” ground texture, while sloped terrain displays a more rocky and jagged texture (Fig. 5).
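In the application itself this logic lives in an Unreal material graph; as an illustration of the underlying idea, a slope-dependent blend weight can be sketched as follows (the thresholds are illustrative assumptions):

```python
import numpy as np

def slope_blend_weights(normal_z, flat_threshold=0.9, steep_threshold=0.6):
    """Blend factor between a 'flat' texture (e.g., sand) and a 'steep' texture
    (e.g., bare rock), derived from the world-space Z component of the surface
    normal (1.0 = level ground, 0.0 = vertical cliff). Thresholds are illustrative."""
    w = (np.asarray(normal_z) - steep_threshold) / (flat_threshold - steep_threshold)
    return np.clip(w, 0.0, 1.0)   # 1 -> pure flat texture, 0 -> pure rock texture

# Per pixel/vertex: final_color = w * sand_albedo + (1 - w) * rock_albedo
```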

Fig. 5
figure 5

Landscape Material with texturing depending on the level geometry. In this example, the flatter terrain has sandy coverage, while the sloped terrain will display bare rocks

To reduce sharp transitions between different types of textures and, therefore, achieve a more realistic look, a distance blend was implemented that uses the player’s distance to determine the amount of interpolation between the two textures (Fig. 3). This distance variation has also been added to the normal maps. To eliminate texture repetition due to tiling, a Texture Bombing feature has been added. It takes a tiling texture input, samples it multiple times with offset coordinates (flipped and rotated), and blends these samples with neighboring ones to reduce visible tiling (Fig. 6).
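The distance blend itself boils down to a clamped interpolation factor driven by the camera distance; a minimal sketch with assumed fade distances (in Unreal units):

```python
def distance_blend(dist_to_camera, near=500.0, far=5000.0):
    """Interpolation factor between a high-frequency near texture and a
    low-frequency far texture; the 500/5000 fade range is an illustrative
    assumption, not a value taken from the project."""
    t = (dist_to_camera - near) / (far - near)
    return min(max(t, 0.0), 1.0)   # 0 -> near texture, 1 -> far texture
```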

Fig. 6
figure 6

Left Texture Bombing disabled, right Texture Bombing enabled. Observe how the regular patterns disappear with Texture Bombing enabled

Triplanar mapping was used to eliminate unnaturally stretched textures and hard seams. It works by mapping the texture three times in three different directions (along the X, Y, and Z axes) and blending the results together. If a pixel of the geometry (Static Mesh or Landscape) faces 50% in the X-axis direction and 50% in the Y-axis direction, it receives 50% of the X-axis projection and 50% of the Y-axis projection. It is important to note that triplanar mapping costs some performance, because each texture is sampled three times; this can quickly add up if multiple maps (specular, roughness, normal) are used (Fig. 7).
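Schematically, the three blend weights are derived from the absolute components of the surface normal. The following sketch illustrates the principle; the callable texture samplers and the sharpness parameter are illustrative placeholders, not Unreal’s material nodes:

```python
import numpy as np

def triplanar_sample(position, normal, tex_x, tex_y, tex_z, scale=1.0, sharpness=1.0):
    """Triplanar blend of three planar projections of a texture.
    `position` and `normal` are 3-element arrays; `tex_*` are callables mapping
    2-D coordinates to a colour; `sharpness` > 1 tightens the blend."""
    w = np.abs(np.asarray(normal, dtype=float)) ** sharpness
    w = w / w.sum()                                # blend weights sum to 1
    p = np.asarray(position, dtype=float)
    cx = tex_x(p[[1, 2]] * scale)                  # projection along X (uses YZ plane)
    cy = tex_y(p[[0, 2]] * scale)                  # projection along Y (uses XZ plane)
    cz = tex_z(p[[0, 1]] * scale)                  # projection along Z (uses XY plane)
    return w[0] * cx + w[1] * cy + w[2] * cz
```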

Fig. 7
figure 7

The left image has triplanar mapping disabled, the right image enabled

3.3 Light in Darkness

In the AR app, a directional light was added for better depth perception, but the effect is small: at Mons Huygens, the combination of smooth relief and a very high sun angle on the light-to-dark-gray surface leaves no shadows, and the Mars mosaic was combined from images with varying sun angles (Fig. 1c).

In Unreal Engine, a more complex lighting solution with movable light sources was added to make use of Unreal Engine’s physically based rendering (PBR) system (Epic Games 2021). This means that the way images are rendered is modeled after the light flux in the real world. The result is more accurate and typically more natural-looking, and physically based Materials work equally well in all lighting environments. For objects with Materials applied to them, the Material is used to calculate how light interacts with the surface of that object. These calculations use incoming data that is fed into the Material from a variety of images (textures) and math expressions, as well as from various property settings inherent to the Material itself.

4 Interactive Elements and Educational Context

To interact with the applications, several input methods and devices are supported: the AR app supports touch screen usage and responds to mobile device movement, whereas the VR app requires a powerful desktop PC and supports input from keyboard and mouse, an Xbox gamepad, or the HTC Vive motion controllers. As output devices, a normal computer screen or the HTC Vive head-mounted display (HMD) are supported. Both the AR and the VR application have user interface (UI) elements as annotations of in-game elements. Only the AR app uses a head-up display (HUD) to turn the annotation UI on and off, to turn the “sea level” of the areoid and selenoid on and off, and to switch between Earth and Mars/Luna. The VR UI elements include an info layer with background information on the Mars rovers’ landing sites, which is triggered when the user approaches the sites; the user is not distracted, as no classic-style HUD UI is needed to interact with the built-in functions. Both apps come with worksheets and detailed instructions as well as tasks for the students to solve and sample solutions for the teachers. The teacher material includes user guides to help deal with the often-unfamiliar technology, especially the orientation and navigation in the VR environment.

Navigating in a virtual environment is difficult, especially for new users, and often leads to motion sickness. Smooth head tracking and a stable frame rate reduce this effect. To increase the sensation of moving in a real environment, a collision map is combined with a Static Mesh surrounding the player, enabling smooth transitions between levels of detail, e.g., between the overall Landscape and the landing sites’ Static Meshes. To increase immersion, several features were added, including a day–night cycle that uses a movable light source and moving cloud textures taken from Curiosity images (NASA 2021b).

The Sky Atmosphere component in Unreal Engine is a physically based sky and atmosphere rendering technique. It is flexible enough to create an Earth-like atmosphere with a time of day featuring sunrise and sunset, or to create extra-terrestrial atmospheres of an exotic nature. The Sky Atmosphere approximates light scattering through a planetary atmosphere’s participating media, giving outdoor levels a more realistic look by using directional lights that represent the sun disk and are affected by the color of sunlight and atmosphere. The sky color varies depending on the altitude of the sun, or in other terms, how close the dominant directional light’s vector gets to being parallel with the ground. Scattering and fuzzy settings allow full control of the atmospheric density. For the aerial perspective, the curvature of the world is simulated accordingly when transitioning from ground to sky to space views (Epic Games 2021). The Sky Atmosphere was adjusted to represent a Martian atmospheric color ramp and cloud cover.

Thus, the user can experience a 3D version of a map and DEM, enabling students to relate 2D representations of Mars’ surface to the virtual 3D environment. Additionally, the user can switch between surface material overlays using keypad functions, seeing either slope data, a terrain roughness index, or imagery. This closes the perceived gap between raw data, data manipulation, and cartography, and helps to relate elevation data and scale to classified representations.
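Such overlays are ordinary raster derivatives of the DEM. The exact definitions used in the app are not detailed here; as an illustration, slope and a simple roughness measure (local standard deviation of elevation) could be derived as follows:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def slope_degrees(dem: np.ndarray, cell_size: float) -> np.ndarray:
    """Slope in degrees from central differences (one common definition)."""
    dz_dy, dz_dx = np.gradient(dem.astype(float), cell_size)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

def roughness(dem: np.ndarray, size: int = 3) -> np.ndarray:
    """Local standard deviation of elevation in a size-by-size window,
    a simple terrain roughness measure (illustrative, not necessarily
    the index used in the VR app)."""
    dem = dem.astype(float)
    mean = uniform_filter(dem, size)
    mean_sq = uniform_filter(dem * dem, size)
    return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))
```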

In the AR, the highest and lowest point of each mountain are annotated in the models to give users a reference scale (Table 2).

Table 2 Annotations for the 3D models in the AR environment

However, for students, this may not be enough to understand and get a feeling for the dimensions; hence, scale bars are implemented using simple boxes with striped textures. For Mons Huygens, which has a relatively high relief compared to the width of the model, the 1 by 1 by 10 km scale bar stands upright and encompasses the entire elevation range in the image (Fig. 8). For Olympus Mons, with more than 600 km width but less than 30 km of elevation difference, the 10 by 10 by 100 km scale bar lies flat (Fig. 9). Since both lowest points are “below sea level”, an imaginary sea level is implemented as a simple blue textured plane. The sea levels of both bodies are defined through their reference surfaces, the areoid and selenoid (Lemoine et al. 2001, 2013); but while Luna never had a sea level, Mars’ ancient sea levels are clearly visible in Mars elevation data (Perron et al. 2007) and lie far below the areoid. Both extra-terrestrial environments are juxtaposed with Earth environments, processed and visualized in the same way: Mount Everest on Earth uses the same image target, scale, and “sea level” as Mons Huygens on Luna, and the same is true for the pair of Maunakea on Earth and Olympus Mons on Mars. This promotes experiencing the scale of mountains on the different bodies and comparing their nature and sizes.

Fig. 8
figure 8

Annotated, textured 3D model of Mons Huygens on Luna with the imaginary sea level/selenoid and HUD, empty A4 sheet for scale

Fig. 9
figure 9

Annotated, textured 3D model of Olympus Mons on Mars with the HUD, empty A4 sheet for scale

The AR app developed in this study works on all Android mobile devices with a back camera running Android 6.0 Marshmallow or later, meaning that more than 10,000 device models are able to run it, including many technically outdated and easily affordable devices.

However, depending on their socio-economic background, not all students have access to smartphones; classroom activities, though, often include work in small groups in which several students can share one smartphone running the respective app.

Both apps are being tested in a regular class and the results will be published at a later date.

5 Conclusion

The goal of this study was to demonstrate different levels of complexity for integrating real-world scenarios based on scientific data into digital learning environments using Unreal Engine and Unity. With the applications at hand, students can experience scale in a virtual environment that translates remotely sensed data into 3D models as close to the original resolution of the scientific dataset as possible without losing performance. By adopting workflows similar to the Unreal and Unity workflows presented in the methods section of this paper, elevation models can be made tangible in AR and VR. The integration of real-world data allows teachers to take students on a realistic and informative field trip to real-world places that could not otherwise be visited in the school context. Students are encouraged to think about the relation of maps to real environments and to relate extra-terrestrial sizes and distances to environments on Earth.

Three major conclusions were drawn from integrating these applications in two different technical environments for AR and VR. (1) The complexity of the application must vary with the degree of virtual content integration, from the small, simple models in the AR app to the relatively simple Mars full-landscape model to the multi-scale models of the individual landing sites. (2) In accordance with this, the underlying geospatial information has to be pre-processed at the respective level of complexity. Finding a balance between graphic complexity and data integrity is crucial. (3) The purpose of the content dictates the elements that have to be integrated into the virtual environment: while the AR app requires a UI, a UI in the VR could break immersion. This immersion is achieved by integrating complex surroundings, i.e., skybox and lighting, which are not necessary in the AR app.

When it comes to making new technologies and scientific content available to students, inclusion is a topic that has to be addressed thoughtfully. Participation in the experience of virtual and augmented reality should, therefore, not depend on individual equipment and access to costly content. While high-end PCs are needed to experience VR, the AR apps can be used with almost all smartphones. This mix of potential devices ensures that the majority of the target audience can access the content using their own or the school’s equipment, while a range of input controllers allows access for students with physical disabilities. With the school acting as a distributor, the material is available to all students regardless of income or social status.