1 Introduction and Related Work

Many tasks, such as surgery or flying, require extensive training of motor skills because the margin for error is extremely small. Virtual reality offers an inexpensive way to simulate a large variety of training environments. While the focus of virtual reality systems has traditionally been on visual presentation, we present a low-cost training environment that uses the Microsoft Kinect and the Oculus Rift to create a virtual reality in which the user's real-world interactions are transferred into the virtual world. This allows basic motor skills to be transferred from the virtual world to the real world, as we show in an example application that teaches users to juggle.

With recent developments in VR technology, such equipment will soon be available and affordable to the average user. Young et al. [YGAB14] compared a low-cost virtual reality system, similar to our setup, with a high-cost virtual reality system. In several experiments on three-dimensional coordination they showed that the low-cost system outperformed the high-cost system in nearly all cases. In a user study, Santos et al. [SDP+09] compared a head-mounted display with a desktop-based system for 3D navigation. They showed that users were more successful with the desktop system, although the majority stated that the head-mounted display allowed a more natural and intuitive interaction.

Several projects focus on teaching within a virtual simulation. For instance, Webel et al. [WFKO13] presented a simulation that uses the Kinect and the Oculus Rift to create a virtual world in which the user can visit historical places such as the Siena Cathedral. The nDiVE project [RWG+13] offers different tools and training programs to teach the management of supply chains. They also showed the emotional effects of experiencing death in a virtual environment [RTC+14] compared to common desktop-based systems. Other projects focus more on training a skill than on transferring knowledge. Aggarwal et al. [AGE+06] presented a virtual reality training program that enables novice surgeons to practice laparoscopic surgery at different levels of difficulty. The simulation also observes the users and calculates a score representing their skill level.

2 Virtual Reality and Interaction

To teach users motor skills in a virtual environment, they need to be able to interact with it. For the training effect to be useful in the real world, the interaction in the virtual world should be as close as possible to the real world. For the task of juggling, arm movement as well as the opening and closing of the hands are the key properties of the required interaction. Figure 1 shows the setup, consisting of the Oculus Rift, a Microsoft Kinect, and custom-made gloves. The skeleton provided by the Kinect is transformed from the Kinect's local coordinate system to the absolute coordinate system of Ogre3D, the library we use as our graphics engine to manage and render the virtual environment. This skeleton provides the relevant points of the user's body, such as the hands and the head, for interaction and is also used to render a stick-figure representation of the user in the virtual world (shown in Fig. 3). The environment is shown via the Oculus Rift from a virtual camera that is attached to the position of the user's head in the virtual world and oriented according to the user's head movements. This already allows the user to see his virtual arms as he lifts his real arms into his field of view.

The hand positions provided by the Kinect are used to catch and throw virtual balls. However, an accurate representation of the hand requires a level of detail the Kinect does not offer [KE12]. Therefore the gloves, shown in Fig. 2, are used to detect the opening and closing of the hands while at the same time providing basic haptic feedback. A ball is caught when it collides with the user's virtual hand while the corresponding glove detects a closed hand; a ball is thrown from a virtual hand when the user opens his real hand. Since the setup does not allow for a detailed model of the hand, it is impossible to calculate an accurate trajectory for the thrown ball. Instead, we implemented a mechanism that automatically computes a perfect trajectory from the throwing hand to the other hand, which allows users to focus on the juggling motion sequence itself.
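The following is a minimal sketch of this catch/throw logic and of the automatically computed trajectory; it is not the authors' actual implementation, and all names (Vec3, Hand, Ball, computeThrowVelocity) as well as the catch radius and flight time are illustrative assumptions. The throw velocity is obtained by solving the projectile equation p1 = p0 + v0*T + 0.5*g*T^2 for v0, so that a ball released at the throwing hand lands exactly in the other hand after a fixed flight time T.

#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(float s)       const { return {x * s, y * s, z * s}; }
    float length() const { return std::sqrt(x * x + y * y + z * z); }
};

struct Hand { Vec3 position; bool closed; };          // closed: reported by the glove
struct Ball { Vec3 position, velocity; bool held; };

constexpr float CATCH_RADIUS = 0.12f;                 // assumed collision radius (m)
constexpr float FLIGHT_TIME  = 0.80f;                 // assumed time of flight (s)
constexpr Vec3  GRAVITY      = {0.0f, -9.81f, 0.0f};

// A ball is caught when it overlaps the virtual hand while the glove reports a closed hand.
bool tryCatch(Ball& ball, const Hand& hand) {
    if (!ball.held && hand.closed &&
        (ball.position - hand.position).length() < CATCH_RADIUS) {
        ball.held = true;
        return true;
    }
    return false;
}

// Solve p1 = p0 + v0*T + 0.5*g*T^2 for v0 so the ball lands in the other hand after FLIGHT_TIME.
Vec3 computeThrowVelocity(const Vec3& from, const Vec3& to) {
    Vec3 v = (to - from) * (1.0f / FLIGHT_TIME);
    return {v.x - 0.5f * GRAVITY.x * FLIGHT_TIME,
            v.y - 0.5f * GRAVITY.y * FLIGHT_TIME,
            v.z - 0.5f * GRAVITY.z * FLIGHT_TIME};
}

// Called when the glove reports that the throwing hand has opened.
void throwBall(Ball& ball, const Hand& from, const Hand& target) {
    ball.held     = false;
    ball.position = from.position;
    ball.velocity = computeThrowVelocity(from.position, target.position);
}

Integrating the released ball under gravity with this initial velocity reproduces the intended arc regardless of how the user actually moved his arm.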

To exploit the potential of a virtual environment, we implemented several functionalities that would not be possible in the real world. In the virtual environment, it is possible to visualize the trajectories of thrown balls and to display highlights that help the user time and direct the throws and catches. It is also possible to slow down time, which lets users juggle in slow motion and gives them more time to react while they are still familiarizing themselves with the juggling sequence.
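Slowing down time can be realized by scaling the simulation time step while rendering and head tracking continue at full rate. The sketch below (reusing the Vec3, Ball, and GRAVITY definitions from the previous sketch) illustrates this under the assumption that ball physics is integrated once per rendered frame; the name timeScale and the factor 0.5 are illustrative choices, not values from the authors' system.

float timeScale = 1.0f;                       // 1.0 = real time, 0.5 = half speed

void setSlowMotion(bool enabled) {
    timeScale = enabled ? 0.5f : 1.0f;        // assumed slow-motion factor
}

// Called once per rendered frame with the real elapsed time in seconds.
void updateBalls(float realDt, Ball* balls, int numBalls) {
    const float dt = realDt * timeScale;      // scaled simulation step
    for (int i = 0; i < numBalls; ++i) {
        if (balls[i].held) continue;          // held balls simply follow the hand
        // Semi-implicit Euler integration under gravity.
        balls[i].velocity.y += GRAVITY.y * dt;
        balls[i].position.x += balls[i].velocity.x * dt;
        balls[i].position.y += balls[i].velocity.y * dt;
        balls[i].position.z += balls[i].velocity.z * dt;
    }
}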

Fig. 1.

Setup for the experiment. The virtual training environment is displayed using an Oculus Rift (A). The user is tracked by a Kinect (B). Opening and closing of hands is detected by gloves (C).

Fig. 2.

Gloves for hand interaction. An analog signal is triggered when the hand is closed and the wire mesh (b) touches the aluminum ball (a). The signal is transported via wires (c) to a microcontroller (d), which converts it to a digital signal and forwards it to the computer (e). In addition, the aluminum ball provides haptic feedback when catching a ball.
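A hypothetical Arduino-style sketch for reading one glove is given below, under the assumption that the wire mesh and the aluminum ball effectively form a switch between a digital input pin and ground; the pin number, debounce interval, and the one-byte serial protocol ('1' = closed, '0' = open) are assumptions and not taken from the actual hardware.

const int CONTACT_PIN = 2;               // wire mesh / aluminum ball contact
const unsigned long DEBOUNCE_MS = 15;    // ignore contact bounce

int lastState = HIGH;                    // HIGH = open hand (internal pull-up)
unsigned long lastChange = 0;

void setup() {
    pinMode(CONTACT_PIN, INPUT_PULLUP);  // closing the hand pulls the pin LOW
    Serial.begin(9600);
}

void loop() {
    int state = digitalRead(CONTACT_PIN);
    if (state != lastState && millis() - lastChange > DEBOUNCE_MS) {
        lastState = state;
        lastChange = millis();
        Serial.write(state == LOW ? '1' : '0');  // report hand closed/open to the PC
    }
}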

Fig. 3.

The images on the left show the user, while the images on the right show what the user currently sees: a view into the virtual training environment. White lines delimit the training area. An abstract representation of the user is rendered with blue lines. The hands are drawn using 3D models that indicate whether they are open or closed. Text in front of the user shows the current assignment (Color figure online).

Fig. 4.

The beginning of the cascade juggling pattern. At the start, hold two balls in the dominant hand and the third ball in the other hand (a). Throw one ball from the dominant hand (b). When this ball reaches its apex, throw the ball from the other hand (c). When the second ball reaches its apex, catch the first ball and throw the last ball (d). To juggle continuously, repeat steps (c) and (d).

3 Experimental Evaluation

To evaluate our virtual reality system, we performed a user study with nine participants (three women and six men) aged between 20 and 43, none of whom could juggle. The goal was to teach the cascade juggling pattern shown in Fig. 4. Users had to complete a virtual training course in our setup (see Fig. 1). The virtual training environment consisted of a simple room delimited by white lines, as shown in Fig. 3. The user's avatar was positioned in the middle of the room. The progress and the current assignment were displayed as text on the opposite wall, and a computer-generated voice led the user through every exercise of the course. As a first task, users had to perform simple ball-throwing exercises to adapt to the virtual environment. In the following stages, they were presented with tasks of increasing difficulty leading up to the actual juggling sequence with one, two and finally three balls. In the final stage, all three balls had to be juggled consecutively. Every stage had to be repeated several times until a certain success rate was reached (see the sketch below). Afterwards, the users were given real juggling balls and had ten minutes to try what they had learned in the virtual training. During that time, the number of consecutive throws and catches was counted. Finally, users were asked to fill out a questionnaire to gain insight into qualitative aspects of the virtual environment and the training course.
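The stage-gating logic can be thought of as follows. This is only an illustrative sketch; the minimum number of attempts and the success-rate threshold are assumptions rather than the parameters used in the study.

struct Stage {
    int attempts  = 0;
    int successes = 0;
};

constexpr int   MIN_ATTEMPTS     = 10;    // assumed minimum attempts per stage
constexpr float REQUIRED_SUCCESS = 0.8f;  // assumed success-rate threshold

void recordAttempt(Stage& s, bool success) {
    ++s.attempts;
    if (success) ++s.successes;
}

// The course advances to the next stage only once this returns true.
bool stageCompleted(const Stage& s) {
    return s.attempts >= MIN_ATTEMPTS &&
           static_cast<float>(s.successes) / s.attempts >= REQUIRED_SUCCESS;
}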

Table 1. Number of subjects able to perform n consecutive throws/catches.

3.1 Results

On average, it took 27 minutes to complete the virtual course. All users familiarized themselves with the virtual environment without difficulty. While the trajectory of the balls was perceived as realistic, the process of throwing and catching was less convincing. The results for the number of consecutive throws and catches with real balls are shown in Table 1. We defined being able to juggle as achieving five consecutive throws and catches. After the training, three subjects were able to juggle, despite the fact that the throwing process had been significantly simplified in the simulation. In addition, all users stated that they knew how to juggle but needed more time to practice with real balls.

4 Conclusion

We presented an interactive training environment that teaches motor skills and observes the learning progress by combining a virtual environment with the real-world movements of the user. In a user study, we showed that motor skills can be transferred from a virtual environment to the real world even if certain aspects are simplified. With these promising results, we hope to address more complex tasks in virtual courses for inaccessible or dangerous domains in the future.