1 Introduction

Motion capture, the pioneer of Motion Sensing technologies, originated with Max Fleischer's Rotoscope in 1915 and has since been successfully applied in many fields such as virtual reality, gaming platforms, ergonomics research, simulation and biomechanics [1]. Motion Sensing technologies can not only capture the skeletal movements of the human body in real time, but also interact with surrounding equipment or the environment. Spanning fields such as games, medicine, commerce and the smart home, they offer huge room for development and imagination. In April 2013, Perfect World released Motion Sensing videos of "Swordsman" produced by SharpNow, which launched a prototype of 3D sensing based on gesture recognition technologies.

Nowadays, Motion Sensing technologies are widely used in design. Having been thoroughly explored in the game industry, Kinect also leads in other fields, including clinical medicine, remote operation, medical education and medical data collection. In commerce, AR Door, a technology company in Russia, built a "virtual fitting mirror" with the Kinect Motion Sensing peripheral: shoppers stand in front of the mirror and a 3D image of themselves trying on new clothes is displayed automatically. In applied research on assistive technology for the disabled, Michael Zollner of the University of Konstanz developed the Navigational Aids for the Visually Impaired (NAVI) to help blind users navigate. In addition, Motion Sensing technologies are used in robots, smart homes and so on [4].

In this paper, Unity3D is integrated with Kinect Motion Sensing technology, and a virtual interactive Motion Sensing design for swimming is completed on the Unity3D engine using Kinect optical sensing.

2 Purposes

With the development of intelligent computer technologies, virtual interaction technologies have emerged in which Motion Sensing interactive systems, built on 3D digital content, connect interactive hardware and software. Today, Motion Sensing technology is not only an advanced 3D digital interactive multimedia technology but also a brand-new research interest in the field of human-computer interaction. It has been widely applied in medicine, education, rehabilitation, e-commerce, competitive sports, animation and game production. With a broad prospect of application, this technology enables new-generation human-computer interaction, 3D human modeling and skeleton tracking through motion, gesture and speech recognition; its applications range from depth data processing to robot vision and control. In the future it will find even more uses, and there will be great market demand for well-designed virtual interactive Motion Sensing products.

3 Methods

  1. Literature Consultation

     Foreign and domestic literature on Motion Sensing technologies, 3D human modeling and the construction of virtual reality environments is consulted in order to determine the content and methods for studying Motion Sensing technologies.

  2. Interview

     Structured and unstructured interviews are conducted with experts on Chinese sports information technology and sports training; their opinions on the feasibility and effectiveness of the research are summarized, laying a foundation for the surveys and empirical research.

  3. Experiment

     Virtual interactive models for swimming are built with computer numerical simulation technology.

  4. Mathematical Analysis

     Data analysis is performed on the core content of the Motion Sensing technologies, including the depth data stream and skeleton tracking.

4 Results

4.1 Motion Sensing Technologies

As a new and advanced 3D digital interactive multimedia technology, Motion Sensing has broad development prospects and tremendous room for applied research, which encourages further studies and applications in the field of sports. Kinect, a representative Motion Sensing device, is a 3D Motion Sensing camera, as shown in Fig. 1. The Kinect hardware combines technologies from multiple domains, including acoustics, optics, electronics and mechanics [3]. With real-time dynamic capture, it can capture the motions of human limbs and autonomously recognize, memorize, analyze and process them. With this technology, people can not only perform different movements and interact with one another in virtual scenes, but also share pictures and information over the Internet, making it possible for "the body to be the controller".

Fig. 1. Kinect for Xbox One/Windows

Kinect's key technologies are skeleton tracking, motion recognition, facial recognition and speech recognition. Kinect acquires depth images with an infrared camera and recognizes the human skeleton in order to separate the human body from the background. Skeleton recognition is often initialized by having the user stand in a pose resembling the Chinese character "大" (arms and legs spread). Meanwhile, skeleton acquisition is somewhat disturbed by dark-colored clothing, so a white background is more favorable for acquiring skeletons. Concerning the spatial coordinates of the skeleton, it is inadvisable to read the skeleton from the left or right side, or the joints will overlap and affect the intended tracking.
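Outside the Unity environment, the same skeleton data can be read directly with the Kinect for Windows SDK 1.x managed API. The following C# sketch is illustrative only (it is not part of the described design) and simply prints the head position of the first tracked skeleton:

```csharp
using System;
using System.Linq;
using Microsoft.Kinect;   // Kinect for Windows SDK 1.x managed API

// Illustrative sketch: enable skeleton tracking and read the head joint.
class SkeletonDemo
{
    static void Main()
    {
        KinectSensor sensor = KinectSensor.KinectSensors.FirstOrDefault();
        if (sensor == null) return;

        sensor.SkeletonStream.Enable();
        sensor.SkeletonFrameReady += (s, e) =>
        {
            using (SkeletonFrame frame = e.OpenSkeletonFrame())
            {
                if (frame == null) return;
                Skeleton[] skeletons = new Skeleton[frame.SkeletonArrayLength];
                frame.CopySkeletonDataTo(skeletons);

                // Take the first skeleton that is fully tracked.
                Skeleton tracked = skeletons.FirstOrDefault(
                    sk => sk.TrackingState == SkeletonTrackingState.Tracked);
                if (tracked != null)
                {
                    SkeletonPoint head = tracked.Joints[JointType.Head].Position;
                    Console.WriteLine("Head: {0:F2}, {1:F2}, {2:F2}", head.X, head.Y, head.Z);
                }
            }
        };
        sensor.Start();
        Console.ReadLine();   // keep the process alive while frames arrive
        sensor.Stop();
    }
}
```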

4.2 Construction of Virtual Environment

4.2.1 Scenario and Human Body Modeling

  1. Model of the swimming stadium

     The model of the swimming stadium is made up of several parts, and the scenario model is built as follows. The bottom of the swimming pool is derived from a built-in cuboid model whose length, width and height are set to 500 cm, 400 cm and 350 cm respectively. First, the top surface of the cuboid is cut by four intersecting lines. The resulting central rectangle is then pushed into the cuboid with the "Extrude" command to form the shape of the pool, as shown in Fig. 2.

     Fig. 2. 3D model of swimming pool

     For the water of the swimming pool, the material parameters are set as follows: diffuse color 149, 178, 222; specular level 80; glossiness 80; opacity 40. A VRay noise map is used for the bump channel. For the noise parameters, the ripple size is set to 5 (the lower the value, the smaller and denser the ripples). In addition, the ripples are rendered with a reflection map in combination with the lighting.

  2. Human body modeling

     Human body modeling is based on physical appearance and physiology, and the resulting human model is a polygonal model. Three built-in 3ds Max primitives serve as the head, the upper body and the hips respectively, and after simple modification they are bridged together. The areas where the arms attach and where the legs meet the crotch then form sections from which the arms and legs can be made; these are extruded according to body proportions to obtain the initial shape of the body. Next, the basic structure of the figure (including hands and feet) is shaped with commands such as Cut, Connect, Break, Collapse and Bridge. A "TurboSmooth" modifier is added on top of the "Editable Poly" to increase the polygon count and smooth the body. Finally, the surface is refined by repeatedly applying the tools under "Paint Deformation", so that the structure of the body becomes more vivid and lifelike. Once the body is finished, work turns to the hands, the head and the facial features, as well as further detailing. During modeling, each polygon should be kept four-sided as far as possible, although a few five-sided polygons are acceptable at turning points; for example, the turn at the zygomatic bone is made with a five-sided vertex. Edge loops should follow the actual orientation of the human muscles, and for the muscles of the arms and legs there should be loops perpendicular to the bones, which helps when producing skeletal animation. Care should also be taken to check for redundant vertices; any that are found should be removed promptly, or subsequent skinning will be affected, as shown in Fig. 3.

     Fig. 3. Rendered model of swimming stadium

4.2.2 Construction of Unity Virtual Environment

Unity supports DirectX 11 and, in combination with its optimized lighting system and ShaderLab, can realize lifelike AAA-grade virtual simulation scenes; physical effects such as rigid-body collision and gravity are simulated efficiently, making the virtual environment more realistic and vivid [12, 16].
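As a minimal illustration of this physics support (not taken from the original design; the class name and parameter values are placeholders), a short Unity C# script can make a scene object participate in gravity and collision simulation:

```csharp
using UnityEngine;

// Hypothetical helper: attaches a Rigidbody so the object
// participates in Unity's built-in gravity and collision simulation.
public class EnablePhysics : MonoBehaviour
{
    void Start()
    {
        // Add a Rigidbody if the object does not already have one.
        Rigidbody rb = GetComponent<Rigidbody>();
        if (rb == null)
        {
            rb = gameObject.AddComponent<Rigidbody>();
        }
        rb.useGravity = true;   // let gravity act on the body
        rb.mass = 1.0f;         // illustrative mass in kilograms
    }
}
```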

Throughout the design process, a large-scale environment such as the blue sky must be built around the scene in addition to the ground. This environment can be created with the Skybox in Unity, which makes the setting more realistic. Unity itself ships with a Skybox resource package containing six different sky materials, and the Skybox materials required by the design can also be drawn according to the Skybox mapping rules. It should be noted that the sky is not rendered unless the Skybox is set through "Render Settings": click Edit→Render Settings and set the Skybox Material option.

To design a custom Skybox, a new material is created under Assets→Materials in the Project panel, and its Shader property in the Inspector is changed to Skybox. The consecutive pictures drawn according to the cube-mapping rules are imported under Assets, their Wrap Mode is changed to Clamp, and they are assigned to the material in the corresponding orientations to render the desired environment [17, 23].
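The same Skybox configuration can also be done from a script. The sketch below assumes Unity 4.x's legacy "RenderFX/Skybox" shader and six hypothetical sky textures drawn according to the cube rules; it mirrors the manual Wrap Mode and Render Settings steps described above:

```csharp
using UnityEngine;

// A minimal sketch of the same steps done from script instead of the editor.
// "RenderFX/Skybox" is the legacy 6-sided skybox shader in Unity 4.x; the
// texture fields are placeholders for the six imported sky pictures.
public class SkyboxSetup : MonoBehaviour
{
    public Texture2D front, back, left, right, up, down;

    void Start()
    {
        Material sky = new Material(Shader.Find("RenderFX/Skybox"));

        // Clamp avoids visible seams at the cube edges
        // (the manual "Wrap Mode -> Clamp" step mentioned above).
        foreach (Texture2D t in new[] { front, back, left, right, up, down })
            t.wrapMode = TextureWrapMode.Clamp;

        sky.SetTexture("_FrontTex", front);
        sky.SetTexture("_BackTex", back);
        sky.SetTexture("_LeftTex", left);
        sky.SetTexture("_RightTex", right);
        sky.SetTexture("_UpTex", up);
        sky.SetTexture("_DownTex", down);

        // Equivalent to Edit -> Render Settings -> Skybox Material.
        RenderSettings.skybox = sky;
    }
}
```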

In Unity 4.x, water effects are created with the water resource package known as Water (Pro). This package contains two Water4 instances, daylight water and nighttime water, of which the latter appears darker. The water is dragged into the Hierarchy panel and the Scale of its Transform is changed in the Inspector to obtain the required water size; it is then moved to the desired place with the move tool, as shown in Fig. 4 (a scripted equivalent of this step is sketched after the figure).

Fig. 4. A running scenario
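The scaling and placement of the water can likewise be expressed as a short script; the object name and sizes below are purely illustrative and not taken from the original project:

```csharp
using UnityEngine;

// Illustrative only: resizes and repositions a water prefab instance so it
// fills the pool area; "Water4Advanced" is a placeholder object name.
public class PlaceWater : MonoBehaviour
{
    void Start()
    {
        GameObject water = GameObject.Find("Water4Advanced");
        if (water != null)
        {
            water.transform.localScale = new Vector3(5f, 1f, 4f); // match pool footprint
            water.transform.position = new Vector3(0f, 1.2f, 0f); // water surface height
        }
    }
}
```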

4.3 Realization of Motion Sensing Effects for Games

A review of the literature and online materials shows that current Motion Sensing applications are mostly realized by integrating the Unity software with the Kinect hardware, and the Kinect hardware mainly transmits data through one of the following methods [24]:

  1. the KinectWrapper.unitypackage plug-in (Carnegie Mellon);

  2. the OpenNI_Unity_Toolkit-0.9.7.4.unitypackage plug-in;

  3. the Zigfu plug-in;

  4. the Adevine1618 plug-in;

  5. .dll plug-ins written and packaged in C#, C++ or Java.

After analyzing the advantages and disadvantages of the above plug-ins, this paper transmits data between the Unity software environment and the Kinect equipment through the KinectWrapper.unitypackage plug-in together with user-defined packaged plug-ins.

This paper focuses on the Unity-Kinect data transmission process of the KinectWrapper.unitypackage plug-in. The remaining software environment consists of Kinect SDK 1.7 and Unity 4.5.1.

4.3.1 Key Motion Sensing Technologies

The KinectWrapper.unitypackage plug-in contains multiple scripts, which play different roles in the data transmission between the Kinect equipment and the Unity software, as shown in Table 1.

Table 1. A List of KinectWrapper.unitypackage Plug-in Scripts

Apart from the scripts, the Kinect Prefab is critical for data transmission between the Unity3D software and the Kinect equipment. When KinectPointController or KinectModelControllerV2 is used, the prefab must be assigned to the exposed variable Skeleton Wrapper (Sw).

4.3.2 Discussion of Model Movement

In this design, an interactive function is realized in which the "matchstick man" (the KinectAvatar prefab of KinectWrapper.unitypackage) follows the movement information about the user acquired by the Kinect equipment. All the points in the Hierarchy panel are matched with the corresponding variables exposed by KinectPointController. A script "man.cs" is written to control the movement of the "matchstick man": KinectPointController is modified, the vel value of "man.cs" is set to 1, and "goTransform.position" of "man.cs" is updated accordingly. The "matchstick man" then moves according to the value of "goTransform.position" in "man.cs", realizing the interactive function that the "matchstick man" moves with the user. A sketch of such a control script is given below.
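The source of "man.cs" is not reproduced here; the following is a minimal sketch which assumes the script simply translates a target transform at speed vel, with all names other than vel and goTransform being hypothetical:

```csharp
using UnityEngine;

// Hypothetical reconstruction of "man.cs": moves the "matchstick man"
// by updating goTransform.position each frame at speed vel.
public class man : MonoBehaviour
{
    public Transform goTransform;      // the root of the matchstick man
    public float vel = 1.0f;           // movement speed, set to 1 in this design
    public Vector3 moveDirection;      // direction derived from Kinect joint data

    void Update()
    {
        if (goTransform == null) return;

        // Translate the avatar; moveDirection would be filled in from the
        // skeleton data exposed by KinectPointController.
        goTransform.position += moveDirection.normalized * vel * Time.deltaTime;
    }
}
```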

4.3.3 Motion Sensing Design of Swimming

The Kinect Prefab is placed in the Hierarchy panel, and KinectModelControllerV2 is attached at the parent level of the model. The bones of the model are assigned to the corresponding exposed skeleton variables, and the Kinect Prefab is dragged onto the exposed variable Sw of KinectModelControllerV2 on the human model. It should be noted that Kinect controls only 20 joints, whereas KinectModelControllerV2 exposes more than 20; the wrist, hand and finger variables should therefore be assigned to the hand bones, and movement is better when the ankle and foot variables are likewise assigned to the corresponding bones of the model. To control two human models, the Player variable must be set. The Mask variable can be set if not all joints are to be controlled. The model plays its original animations as long as "Animated" is checked, in which case BlendWeight (0-1) must be set; the value of BlendWeight determines how the animation and the Kinect-driven motion are blended, as shown in Fig. 5.

Fig. 5. Values of exposure variables of KinectModelControllerV2
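How BlendWeight is applied inside KinectModelControllerV2 is not shown in this paper; as a hedged illustration, a weight of this kind is typically used to interpolate between the animated joint rotation and the Kinect-driven rotation:

```csharp
using UnityEngine;

// Illustrative sketch only (not the actual KinectModelControllerV2 code):
// blends an animated local rotation with a Kinect-driven rotation.
public static class BoneBlend
{
    public static Quaternion Blend(Quaternion animated, Quaternion kinectDriven, float blendWeight)
    {
        // blendWeight = 0 keeps the original animation,
        // blendWeight = 1 follows the Kinect skeleton completely.
        return Quaternion.Slerp(animated, kinectDriven, Mathf.Clamp01(blendWeight));
    }
}
```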

After the skeletons are bound, the model is run together with the Kinect equipment, so that it moves with the user.

5 Conclusions and Suggestions

5.1 Conclusions

  1. Virtual reality technologies are used to create a lifelike 3D swimming environment and atmosphere, built according to the form of the sport and the swimming setting, in which people feel as if they were actually on the scene.

  2. In combination with Motion Sensing interactive technologies, virtual interaction is made possible for swimming, creating a brand-new pattern of sport.

  3. The trajectories of human motions are acquired in the virtual environment with the Kinect equipment, so that the three-dimensional human model can move in the virtual fitness scene.

5.2 Suggestions

  1. Further improvements should be made to the virtual reality environment, and virtual equipment should be added, so as to develop more vivid virtual realities.

  2. Motion Sensing technologies should be applied more widely, so that virtual Motion Sensing interaction can be realized in more sports.