1 Introduction

In manned spaceflight, astronauts have often encountered spatial disorientation and navigation problems, especially in spacecraft with complex three-dimensional architectures such as space stations [1]. On the ground, gravity makes it effortless to maintain spatial orientation, but in space astronauts must rely more on visual information to orient, because proprioceptive cues from the inner ear and muscles are unreliable in weightlessness [2]. Astronauts therefore tend to orient by recognizing familiar objects, yet they float into arbitrary body orientations and have difficulty recognizing objects viewed from unfamiliar perspectives. "Visual reorientation illusions" and navigation problems have been reported to occur frequently [3–6]. Crew members could not maintain a consistent reference frame because "floors", "ceilings" and "walls" kept changing. Furthermore, because the walls bounded their view, they had difficulty visualizing the spatial relationships between landmarks in the interiors of two adjacent modules. Establishing a sense of direction required complex mental rotation, which made it harder to form a mental relationship between adjacent modules. In particular, when astronauts transferred into a novel module with a different local visual vertical, they could lose their sense of direction instantly. Without the assistance of landmarks or waypoints, astronauts could not instinctively know which way to turn or how to find their way back. Spatial disorientation and navigation problems have a critical impact on crew schedules and safety in spacecraft.

Compared with traditional spacecraft, space stations are much larger in scale and more complex in construction. Astronauts are no longer confined to a tight living space; their tasks frequently require them to work in various orientations relative to the spacecraft interior and to transit from one module to another, which demands an integrated skill set of spatial orientation, fast movement and judging the relationship between two modules. These features pose new challenges for astronaut training. Traditional ground trainer mockups have clear limitations for navigation training. First, the simulated modules are separate, or are not physically connected in the same way as in the actual vehicle, which makes it difficult to develop a "cognitive map" of the space station. Second, it is physically impossible to experience different body orientations and views in ground simulators.

It has been shown that preflight training with virtual reality devices can reduce the incidence of spatial disorientation and help astronauts develop an integrated cognitive map of the spacecraft, improving their performance in orientation and navigation tasks [7–9]. VR technology provides a vivid visual scene for training astronauts to orient and navigate using visual cues. In a virtual weightless environment, astronauts practice observing modules from different perspectives, and their navigation skills can be studied to find an optimal training strategy.

2 VR Based Navigation Training System

To help astronauts practice adapting and gain navigation skills for moving within a space station, we employed virtual reality techniques to build a navigation training system and conducted experiments on it to obtain optimal training strategies. The VR based navigation training system consists of a simulated space station, human-computer interaction (HCI) devices, a computer simulation module and a faculty processing module, as shown in Fig. 1.

Fig. 1. The architecture of the VR based navigation training system

2.1 Simulated Space Station

As the foundation of spatial orientation and navigation training, a virtual scene for orientation practice during intra-space-station activities was built. Trainees were immersed in the virtual environment and encouraged to look around and interact with panels and equipment. A model of the simulated space station was established with reference to Mir and the International Space Station (ISS), as shown in Fig. 2.

Fig. 2. Simulated space station used in the navigation training system. Green arrows represent visual verticals. (Color figure online)

The simulated space station consisted of eleven modules in total: two core modules, five laboratory modules, two manned spacecraft and two nodes. The core modules, labs and manned spacecraft are large rectangular modules that differ in their interior functions. Each node had up to six hatches interconnecting adjacent modules.

The interior arrangement of a module defines a local visual vertical, but for practical reasons the visual verticals of different modules are not all aligned. Landmarks and visual verticals in each module were therefore given particular attention. Most of the module and node interior surfaces were textured with photographs of actual ISS interior surfaces or their ground mockups. The hatches in each module are narrow, so trainees must pitch their head naturally when passing through a hatch.

2.2 HCI Devices

HCI devices are divided into two parts – input devices and output devices.

The main inputs are movements of the trainee's viewpoint and functional instructions. A 3D-mouse is used to control the locomotion of the virtual viewpoint, while its rotation is detected by a head-tracking system. The 3D-mouse (Space Mouse Wireless, 3Dconnexion) is a six-degree-of-freedom device that can simulate the locomotion or rotation of an astronaut. The head-tracking system (FB-042, Flock of Birds) is fixed on the head-mounted display; when trainees turn their head, the view in the head-mounted display changes accordingly. Instruction control is achieved by voice or keyboard.
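As an illustration only, the sketch below shows one way the two input streams could be fused into the rendered viewpoint: the 3D-mouse supplies translation rates in body axes and the head tracker supplies a head orientation quaternion. The function and parameter names are hypothetical and do not come from the actual system software.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def update_viewpoint(position, body_rot, mouse_translation, tracker_quat, dt):
    """Fuse 3D-mouse translation with head-tracker rotation (illustrative sketch).

    position          -- current viewpoint position in station coordinates, shape (3,)
    body_rot          -- scipy Rotation giving the trainee's body orientation
    mouse_translation -- translation rates commanded by the 3D-mouse, in body axes
    tracker_quat      -- head orientation quaternion (x, y, z, w) relative to the body
    dt                -- frame time in seconds
    """
    # Move along the body axes at the rate commanded by the 3D-mouse.
    new_position = position + body_rot.apply(mouse_translation) * dt
    # The rendered view direction is the body orientation composed with
    # the head orientation reported by the tracker.
    view_rot = body_rot * R.from_quat(tracker_quat)
    return new_position, view_rot
```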

Vision and hearing are the main output channels. During training, trainees wear a high-resolution, wide-field-of-view head-mounted display (self-designed), and audio aids are provided through earphones.

2.3 Software Function

The basic function of the computer simulation software is to drive and manage the virtual training scene described above, creating an environment of high visual and physical fidelity in which trainees can interact naturally. The software is composed of interaction programs, collision detection programs, sound effect simulation programs, voice processing programs, and virtual scene generation and management programs.
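Because the modules are rectangular, a very simple collision response is conceivable: treat the viewpoint as a small sphere and keep it inside the module's axis-aligned interior box. The sketch below illustrates this idea under that assumption; it is not the system's actual collision detection code, and hatch openings would need separate handling.

```python
import numpy as np

def clamp_to_module(position, module_min, module_max, radius=0.3):
    """Keep a spherical viewpoint of the given radius inside a rectangular
    module interior defined by its axis-aligned corners (illustrative only)."""
    lo = np.asarray(module_min, dtype=float) + radius
    hi = np.asarray(module_max, dtype=float) - radius
    return np.clip(np.asarray(position, dtype=float), lo, hi)
```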

The faculty software is a significant part of the navigation training platform. It issues training commands and processes training data. The data processing part consists of a data recording module and an integrated evaluation module. The former collects key behavior and performance data during training, including the position and orientation of the astronaut, completion time, pointing errors, etc. The latter processes the selected data and produces an integrated evaluation of performance using a well-trained neural network.
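A minimal sketch of this two-stage pipeline is given below: a recorder accumulates pose samples and reduces them to the features mentioned above (completion time, locomotion distance, pointing error), which are then scored by a trained network. The class, feature set and the scikit-learn-style `predict()` interface are assumptions for illustration, not the actual evaluation module.

```python
import numpy as np

class PerformanceRecorder:
    """Collects per-frame behaviour data during a training run (sketch)."""

    def __init__(self):
        self.samples = []  # list of (time, position) tuples

    def log(self, t, position):
        self.samples.append((t, np.asarray(position, dtype=float)))

    def features(self, pointing_error_deg):
        """Reduce the raw log to the feature vector used for evaluation."""
        times = [t for t, _ in self.samples]
        path = np.array([p for _, p in self.samples])
        distance = float(np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1)))
        return np.array([times[-1] - times[0], distance, pointing_error_deg])

def evaluate(feature_vector, net):
    """Map the feature vector to an integrated score with a trained network.
    `net` is assumed to expose a scikit-learn style predict() method."""
    return float(net.predict(feature_vector.reshape(1, -1))[0])
```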

When the VR based navigation training system is used, the faculty first initializes the training task and trainees put on the HCI devices to interact. The motion of the trainee's head and body and voice instructions are collected by the client and sent to the simulation server over the network. The server manages the virtual training scene and performs collision detection, information processing, etc. The rendered virtual scene is sent to the head-mounted display.
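The sketch below shows, in schematic form, how such a client might package one frame of input and send it to the simulation server; the address, port, packet fields and transport choice (UDP with JSON payloads) are illustrative assumptions rather than the system's actual protocol.

```python
import json
import socket

SERVER_ADDR = ("192.168.1.10", 9000)  # hypothetical simulation-server address

def send_input_frame(sock, head_quat, mouse_axes, voice_command=None):
    """Send one frame of trainee input from the client to the server (sketch)."""
    packet = {
        "head_quat": list(head_quat),    # head-tracker orientation (x, y, z, w)
        "mouse_axes": list(mouse_axes),  # six 3D-mouse axes
        "voice": voice_command,          # recognised voice instruction, if any
    }
    sock.sendto(json.dumps(packet).encode("utf-8"), SERVER_ADDR)

# Example use:
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# send_input_frame(sock, head_quat=(0, 0, 0, 1), mouse_axes=(0, 0, 0.5, 0, 0, 0))
```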

Figure 3 shows a training scene in the simulated space station from the first-person view. Each tour starts at an arbitrary point in the start cabin with a random orientation. Trainees must first find the hatch and plan a route to the target cabin in their mind. When transferring through a node, they use their spatial knowledge or the provided assistance to find the right direction. In the training phase, a small virtual astronaut and a white arrow are provided: the astronaut represents the trainee's orientation and the arrow represents the local visual vertical. Trainees adjust their orientation relative to the arrow (the local visual vertical) and learn the structure of the module from different perspectives. The task description and performance are shown in the corner.
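One way to quantify how well a trainee has aligned the virtual astronaut with the white arrow is the angle between the trainee's up axis and the module's local visual vertical, as sketched below; the function is a hypothetical illustration, not part of the training software.

```python
import numpy as np

def misalignment_deg(body_up, local_vertical):
    """Angle in degrees between the trainee's up axis and the module's local
    visual vertical; 0 means the virtual astronaut and the arrow are aligned."""
    u = np.asarray(body_up, dtype=float)
    v = np.asarray(local_vertical, dtype=float)
    cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))
```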

Fig. 3. A training scene in the simulated space station from the first-person view. The virtual astronaut (not to scale) and the white arrow are provided as assistance only in the training phase

3 Experiment

Because of the constraint of gravity, we acquire spatial knowledge in a single body orientation on Earth and prefer to remember routes as sequences of landmarks. In a space station, however, astronauts can float into any body orientation, and the local visual verticals of adjacent modules are not always consistently aligned, so the "cognitive map" becomes more important. Both landmarks and a sense of direction matter in navigation, but is it better to navigate in a familiar view with variable body orientations, or in a constant body orientation guided by a world reference frame? We conducted an experiment to find optimal navigation strategies for astronaut training.

3.1 Experiment Design

An experiment (n = 30) was performed to investigate the effect of three training strategies on task performance in the VR based navigation training system. All subjects passed the Cube Comparison Test and completed the experiment.

At the beginning of the training phase, subjects had no prior knowledge of the simulated space station. To help them build an understanding of the entire space station, each subject was assigned to one of three groups, balanced by individual ability, each using a different training strategy:

(1) Constant group. Subjects were required to maintain a constant orientation relative to the entire space station, a strategy that resembles the way underwater creatures move: when the target is above, trainees simply float upward without pitching. To enforce this, two rotational degrees of freedom (roll and pitch) of the 3D-mouse were locked; a minimal sketch of this filtering appears after the list. The navigation aids described above were provided.

(2) Inconstant group. Subjects maintained an orientation aligned with the visual vertical of the module they occupied. On arriving at a novel cabin, they had to adjust their body orientation to the local visual vertical, in other words, to bring the virtual astronaut and the white arrow into alignment. The navigation aids described above were provided.

(3) Control group. Subjects were given no navigation aids and floated in arbitrary orientations.
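The sketch referenced in item (1) is given below: a simple filter over the six 3D-mouse axes that zeroes roll and pitch for the constant group. The function name, axis ordering and strategy labels are assumptions for illustration.

```python
def filter_mouse_axes(axes, strategy):
    """Apply the group-specific constraint to the six 3D-mouse axes (sketch).

    axes     -- (tx, ty, tz, roll, pitch, yaw) as read from the device
    strategy -- "constant", "inconstant" or "control"
    """
    tx, ty, tz, roll, pitch, yaw = axes
    if strategy == "constant":
        # Roll and pitch are locked so the subject keeps a fixed orientation
        # relative to the entire station; yaw and translation remain free.
        roll, pitch = 0.0, 0.0
    return (tx, ty, tz, roll, pitch, yaw)
```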

In the training phase, each subject completed 18 routes to become familiar with the interior layout of the simulated space station. Afterwards, 12 sequential routes were completed in the test phase. Subjects first found the hatch and then reached the target cabin as quickly as possible. On arrival they were instructed to turn around and point back to the start cabin (the pointing-backward task) as quickly and accurately as possible. Visibility was good in the first six tests, while smoke was introduced in the last six. Completion time, locomotion distance, pointing errors and turning errors were measured in each test. Subjects' spatial knowledge of the configuration of the simulated space station was assessed in an interview after all tests.
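For the pointing-backward task, the pointing error can be defined as the angle between the direction the subject points and the true direction from the target cabin back to the start cabin, as in the sketch below; this is our reading of the measure, and the function itself is illustrative.

```python
import numpy as np

def pointing_error_deg(subject_pos, pointed_dir, start_pos):
    """Angular error (degrees) of the pointing-backward response (sketch)."""
    true_dir = np.asarray(start_pos, dtype=float) - np.asarray(subject_pos, dtype=float)
    true_dir /= np.linalg.norm(true_dir)
    p = np.asarray(pointed_dir, dtype=float)
    p /= np.linalg.norm(p)
    return float(np.degrees(np.arccos(np.clip(np.dot(p, true_dir), -1.0, 1.0))))
```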

3.2 Experimental Results

Completion time and pointing error are the key performance measures. Results are shown in Figs. 4 and 5.

Fig. 4. Completion time in the simulated space station without and with smoke, grouped by training strategy. Error bars represent ±1 SEM.

Fig. 5. Pointing errors in the simulated space station without and with smoke, grouped by training strategy. Error bars represent ±1 SEM.

Subjects in the constant group had lower pointing errors and performed better with smoke, while inconstant-group subjects completed the task faster under good visibility. An ANOVA showed a significant main effect of visibility on completion time [F(1, 27) = 4.3, p = 0.043] and a significant effect of training strategy on pointing errors [F(2, 27) = 5.8, p = 0.005]. The strategy × visibility interaction was not significant for either performance indicator.
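An analysis of this kind (training strategy as a between-subject factor, visibility as a within-subject factor) can be reproduced with a mixed ANOVA, as in the sketch below; the file name and column names are hypothetical, and the pingouin package is one possible tool, not necessarily the one used in this study.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format table: one row per subject x visibility condition,
# with columns: subject, strategy (between), visibility (within), time, pointing_error.
df = pd.read_csv("navigation_results.csv")

# Mixed ANOVA for completion time and for pointing error.
aov_time = pg.mixed_anova(data=df, dv="time", within="visibility",
                          subject="subject", between="strategy")
aov_error = pg.mixed_anova(data=df, dv="pointing_error", within="visibility",
                           subject="subject", between="strategy")
print(aov_time.round(3))
print(aov_error.round(3))
```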

These results suggest that inconstant-group trainees use the strategy by which we orient on Earth and become more familiar with landmark and route knowledge, whereas constant training fosters a sense of direction and helps develop an integrated cognitive map. Both the constant and inconstant strategies led to better performance than the control condition. The optimal training strategy we recommend is therefore a mixed one: astronauts should build comprehensive spatial knowledge and visual orientation skills by using both methods.

4 Conclusion

Because of the complex structure of large spacecraft and human physiological changes in the weightless environment, astronauts find it difficult to orient and navigate in a space station. We applied virtual reality technology to preflight training, reducing the spatial perceptual problems caused by the lack of varied perspectives and views when training in a single body orientation on Earth. The results indicate that VR is feasible and effective for astronaut navigation training.

We designed and implemented a VR based navigation training system. The present study of navigation training methods based on this system leads to the following conclusions: navigation training methods have a significant effect on task performance, and astronauts should build comprehensive spatial knowledge and visual orientation skills by using both training methods.

An interesting finding is that the Cube Comparison Test could be a good predictor of trainees' performance. In further studies, individual ability, gender and human behavior will be the principal concerns.