1 Introduction

Open source software systems are becoming increasingly attractive for developing commercial and industrial solutions in several fields. Usually, they are provided by communities that continuously update the software architecture with new modules. The high frequency of updates forces the community to follow a rigid versioning of its code repositories and, thus, each new released version is always thoroughly tested and improved through bug fixing. The high quality control of open source code makes it a good starting point for developing commercial applications. This is true for the development of virtual environments in several research and industrial contexts in which the basic features are common, such as the interaction with the 3D environment using head mounted displays (HMDs), hand-tracking devices, motion capture (Mocap) systems, and so on.

HMDs allow the visualization of 3D environments by emulating depth perception through stereoscopic rendering. At present, several HMDs make available a software development kit (SDK) to interface the device with a custom application, such as the Oculus Rift [1], HTC Vive [2] and Google Cardboard [3].

Hand-tracking devices permit the detection of hands and fingers with high precision. Among the many solutions, the most interesting ones are the Leap Motion device [4] and the Duo3D [5].

A Mocap system records the movements of objects and people. Recorded actions of the human body are used to animate the virtual avatar of a person and can be used to study kinematic behavior (e.g., gait analysis). Among the different Mocap systems, the optical ones are attracting more and more interest. They can be subdivided into two main categories: marker-based and marker-less. Marker-less systems can be based on low-cost devices such as the Microsoft Kinect v1 and v2 and the Sony PS Camera.

The aim of this paper is to give an overview of the main low-cost systems (both software and hardware) that are used to create virtual and mixed reality applications for research and industrial contexts. The paper shows how it is possible to create software interfaces between different open-source SDKs in order to build applications for custom-fit products. Following this approach, three applications have been developed and are briefly described.

2 Overview of Main Open Source Systems and Low-Cost Devices

Open source systems have been selected on the basis of the experience gained in the development of innovative virtual and mixed reality applications. The selected systems have been classified and subdivided according to the final purpose of the application.

2.1 Basic 3D Environments

The software development of a basic 3D environment requires a module to develop the graphical user interface (GUI) and a module to manage the 3D environment. Among many possible solutions, two important open source systems allow developing a complete basic 3D environment in a simple and fast way (a minimal example combining them is sketched after this list):

  • Qt, used for developing multi-platform applications and GUIs [6]. It is a cross-platform application framework to develop software applications that can run on various software and hardware platforms with little or no change in the underlying codebase. Qt is widely used by many organizations, including the European Space Agency, Panasonic, Philips, Samsung, Siemens and Volvo.

  • VTK (Visualization Toolkit), used to manage the 3D rendering. It supports a wide variety of visualization algorithms and advanced modeling techniques [7]. At present, VTK is used worldwide in commercial applications and in research and development, and represents the basis of many advanced visualization applications such as Molekel, ParaView, VisIt, VisTrails, MOOSE, 3DSlicer, MayaVi, and OsiriX.
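As a reference, the following minimal sketch (in C++) shows how the two systems can be combined: a Qt widget hosts a VTK render window displaying a simple scene. It assumes a VTK 9-style build with Qt support enabled; only public Qt and VTK classes are used.

    #include <QApplication>
    #include <QSurfaceFormat>
    #include <QVTKOpenGLNativeWidget.h>
    #include <vtkActor.h>
    #include <vtkConeSource.h>
    #include <vtkGenericOpenGLRenderWindow.h>
    #include <vtkNew.h>
    #include <vtkPolyDataMapper.h>
    #include <vtkRenderer.h>

    int main(int argc, char* argv[]) {
        // Request the OpenGL surface format VTK expects before creating the app.
        QSurfaceFormat::setDefaultFormat(QVTKOpenGLNativeWidget::defaultFormat());
        QApplication app(argc, argv);

        // Qt widget hosting a VTK render window: the GUI module (Qt) and the
        // 3D environment module (VTK) meet in this single class.
        QVTKOpenGLNativeWidget widget;
        vtkNew<vtkGenericOpenGLRenderWindow> renderWindow;
        widget.setRenderWindow(renderWindow);

        // Minimal scene: a cone rendered through the standard VTK pipeline.
        vtkNew<vtkConeSource> cone;
        vtkNew<vtkPolyDataMapper> mapper;
        mapper->SetInputConnection(cone->GetOutputPort());
        vtkNew<vtkActor> actor;
        actor->SetMapper(mapper);
        vtkNew<vtkRenderer> renderer;
        renderer->AddActor(actor);
        renderWindow->AddRenderer(renderer);

        widget.resize(800, 600);
        widget.show();
        return app.exec();
    }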

2.2 Low-Cost Devices for Virtual/Mixed Reality Environments

To interact with mixed environments, the developer has to create a set of software interfaces between the devices and the 3D environment. In our approach, a virtual/mixed reality environment application comprises:

  • Hand-tracking using the Leap Motion SDK [4, 8]. It makes available a set of modules to easily detect a broad range of gestures (a minimal polling sketch is given after Fig. 1). To make the interaction simple and comfortable, a Natural User Interface (NUI) has been studied and designed [9,10,11]. An ad-hoc module has been developed that extends the VTK classes to interact with the virtual environment of the final application [12].

  • Immersive vision in a 3D virtual world using the Oculus Rift SDK 2.0 [1, 13, 14]. The mixed reality environment is automatically visualized in the user's field of view in the Oculus Rift when the 3D object is detected and, thus, the user can start to carry out his/her work using the hands/fingers detected by the Leap Motion [15, 16], which is mounted on the front of the Oculus Rift as shown in Fig. 1.

    Fig. 1. Leap Motion device mounted on the Oculus Rift.
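To give an idea of the hand-tracking side, the following sketch polls the Leap Motion C++ SDK (v2.x API) for the palm position of the first visible hand. The 0.01 scale factor mapping device millimetres into scene units is an illustrative assumption; the actual mapping is application-specific and is handled by FrameworkVR (Sect. 3.1).

    #include <Leap.h>

    int main() {
        Leap::Controller controller;  // connects to the Leap Motion service
        for (;;) {
            Leap::Frame frame = controller.frame();  // most recent tracking frame
            if (!frame.hands().isEmpty()) {
                Leap::Hand hand = frame.hands()[0];
                // Palm position in millimetres, in the device coordinate frame.
                Leap::Vector palm = hand.palmPosition();
                // Assumed conversion into scene units; tune per application.
                double scenePos[3] = {palm.x * 0.01, palm.y * 0.01, palm.z * 0.01};
                (void)scenePos;  // e.g., drive a VTK 3D cursor from here
            }
        }
    }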

2.3 Motion Capture System

A motion capture system allows tracking human motion in space and analyzing the acquired data to detect key features useful inside the developed application. We consider a marker-less Mocap solution, which uses multiple low-cost depth cameras [17,18,19] (Fig. 2(a)) such as the Microsoft Kinect v2 (Fig. 2(b)) [20].

Fig. 2. (a) Layout of the Mocap system based on multiple Kinect v2; (b) a Microsoft Kinect v2.

The solution uses a commercial application, the iPi Soft suite [21], to manage the acquired data. It is composed of two applications: iPi Recorder and iPi Studio. iPi Recorder records the data acquired by the two Kinect v2 devices. iPi Studio imports the acquired data into a virtual environment and allows processing the data and creating the avatar's virtual skeleton, which can be exported in several file formats, including BVH and FBX.

Once the file has been exported, an application to analyze the acquired animation can be developed by using several modules of the open source platform Blender [22]. Blender is an application for 3D modeling and animation and can be adopted as a module inside a custom application for specific purposes. In our context, it is used to manage body shape animations and the automatic association of an animation with the 3D human avatar to define different body postures. Blender makes available several features that greatly simplify the use of a motion animation acquired by a Mocap system.

3 Software Interfaces

Several software interfaces are required in order to create a proper data exchange among the devices used in a mixed reality application. Depending on the type of application, at least two software interfaces have to be developed: (i) the synchronization of the different coordinate systems of the 3D environment and the chosen devices (e.g., the head mounted display and/or the hand-tracking device); (ii) the interface between the new software modules and Blender for body mesh animation.

In the following sections we introduce the solutions we adopted for the aforementioned interfaces.

3.1 FrameworkVR

FrameworkVR is a general-purpose software library, fully independent of the application the developer wants to implement. It allows using HMDs and hand-tracking devices inside an application whose 3D environment has been implemented using VTK and Qt. It automatically manages the synchronization between the coordinate systems of VTK and those of the interaction devices' SDKs. Furthermore, it makes available a set of software modules to create a Natural User Interface following the finite state machine (FSM) approach. A set of virtual widgets has been developed in order to simplify the design of the NUI [23].
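The following sketch illustrates the FSM approach with a hypothetical three-state NUI; the actual states, gestures and widgets handled by FrameworkVR are richer and are not detailed here.

    #include <map>
    #include <utility>

    // Hypothetical NUI states and gesture events; FrameworkVR's real state
    // set is tied to its virtual widgets.
    enum class NuiState { Idle, Hovering, Grabbing };
    enum class Gesture  { HandDetected, Pinch, Release, HandLost };

    class NuiStateMachine {
    public:
        NuiStateMachine() {
            // Transition table: (current state, gesture) -> next state.
            transitions_[{NuiState::Idle,     Gesture::HandDetected}] = NuiState::Hovering;
            transitions_[{NuiState::Hovering, Gesture::Pinch}]        = NuiState::Grabbing;
            transitions_[{NuiState::Grabbing, Gesture::Release}]      = NuiState::Hovering;
            transitions_[{NuiState::Hovering, Gesture::HandLost}]     = NuiState::Idle;
            transitions_[{NuiState::Grabbing, Gesture::HandLost}]     = NuiState::Idle;
        }

        // Feed a recognized gesture; unknown pairs leave the state unchanged.
        NuiState onGesture(Gesture g) {
            auto it = transitions_.find({state_, g});
            if (it != transitions_.end()) state_ = it->second;
            return state_;
        }

    private:
        NuiState state_ = NuiState::Idle;
        std::map<std::pair<NuiState, Gesture>, NuiState> transitions_;
    };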

Figure 3 shows the UML diagram, which describes the interface of FrameworkVR with the Oculus Rift HMD and Leap Motion for hand tracking.

Fig. 3. UML diagram: interface with the Oculus Rift and the Leap Motion.

3.2 Blender in a Custom Application

In addition to the augmented interaction, some applications also require the possibility to work with a model of the human body (acquired with a 3D scanner) assuming different postures according to the goal of the application. Therefore, the system should be able to associate the data coming from a Mocap system with the human body model (usually a polygonal mesh). Blender makes available a set of automatic and semiautomatic features to perform these tasks. These operations are named rigging and animation retargeting. 3D rigging relates groups of vertices of the body mesh to the nearest bone of the skeleton acquired with the Mocap solution; when the animation runs, the body mesh accurately follows it. Retargeting permits the translation of an animation from one skeleton to another, which can be composed of either the same or a different set of bones. This is necessary when the skeleton of the acquired human avatar differs from the skeleton used for the animation, and it is mandatory when the animation is acquired with a marker-based motion capture system.

The functionalities of Blender are available through software development in Python. Furthermore, Python can be embedded in C++ in a very simple way and, thus, the developed application exploits the functionality of Blender by efficiently interfacing the C++ classes, the other SDKs and the Python modules.
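As an illustration of this embedding, the sketch below drives Blender-side Python from C++ through the standard CPython API. It assumes Blender's functionality is reachable as the bpy module (which requires Blender built as a Python module); the BVH file name and the retarget.py helper are hypothetical placeholders.

    #include <Python.h>
    #include <cstdio>

    int main() {
        Py_Initialize();

        // Import a Mocap animation; bpy.ops.import_anim.bvh is part of
        // Blender's Python API. 'gait.bvh' is an illustrative file name.
        PyRun_SimpleString(
            "import bpy\n"
            "bpy.ops.import_anim.bvh(filepath='gait.bvh')\n");

        // Run the application's own (hypothetical) rigging/retargeting script.
        if (FILE* script = std::fopen("retarget.py", "r")) {
            PyRun_SimpleFile(script, "retarget.py");
            std::fclose(script);
        }

        Py_Finalize();
        return 0;
    }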

4 Developed Applications

FrameworkVR, as well as the aforementioned devices, SDKs and open source tools, has been exploited to develop three applications in different contexts. The applications concern highly customized products that are designed around the human body: lower limb prostheses and made-to-measure garments. Figure 4 maps the aforementioned tools and devices onto each application.

Fig. 4. Mapping among applications, devices and software tools.

4.1 Tailor LABoratory - TLAB

Tailor LABoratory (TLAB) permits the emulation of a tailor's tasks [24]; in particular, we focus the attention on the first step of the design process, during which the tailor takes the measurements of the customer's body, also in specific postures [9]. As shown in Fig. 4, TLAB exploits FrameworkVR to synchronize the VTK, Oculus Rift and Leap Motion SDKs, and uses a Mocap system and Blender to define the different body postures necessary to take the correct customer's measurements. A NUI has been designed in order to take measurements using the hands along the 3D human body model. For this aim, a virtual tape measure has been developed by specializing a VTK widget to emulate the real one used by the tailor.
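As a minimal starting point, VTK's stock distance widget already measures the straight-line distance between two user-placed points; the TLAB tape measure specializes this behavior (not shown here) so that the measurement follows the body surface, as a real tape does. The label format below is an assumption.

    #include <vtkDistanceRepresentation3D.h>
    #include <vtkDistanceWidget.h>
    #include <vtkRenderWindowInteractor.h>
    #include <vtkSmartPointer.h>

    // Add a simple straight-line measuring widget to the scene; the caller
    // keeps the returned widget alive.
    vtkSmartPointer<vtkDistanceWidget> addTapeMeasure(vtkRenderWindowInteractor* interactor) {
        auto rep = vtkSmartPointer<vtkDistanceRepresentation3D>::New();
        rep->SetLabelFormat("%-#6.3g mm");

        auto widget = vtkSmartPointer<vtkDistanceWidget>::New();
        widget->SetInteractor(interactor);
        widget->SetRepresentation(rep);
        widget->On();  // the user then picks the two endpoints in the scene
        return widget;
    }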

Blender has been used to manage body animations and the automatic association of an animation with the 3D human avatar. The customer's body is acquired using a 3D body scanner (e.g., Kinect v1 and Skanect) and his/her motion with a Mocap solution composed of two Kinect v2 devices and iPi Soft.

Then, the acquired skeleton is linked to the 3D human avatar in the correct position, and the vertex groups are generated and populated according to the proximity of each vertex to the nearest bone of the skeleton. When the automatic 3D association is completed, the skeleton can be moved, the 3D human avatar is animated accordingly, and the needed postures are generated.

In addition, Blender is also used to export/import 3D models in several animation formats, such as BVH and DAE, so that the 3D human body can be used within a 3D clothing system to design a made-to-measure garment for the specific customer.

A set of body postures (stored as BVH files) can be easily generated, and the user/tailor can take the measurements by hand, interacting with the 3D human model through the Leap Motion. Figure 5 portrays the described workflow.

Fig. 5. TLAB workflow. Measurements refer to a men's shirt.

4.2 Virtual Orthopedic LABoratory - VoLAB

VoLAB is a mixed reality environment to design lower limb prostheses based on a knowledge-based CAD system known as SMA, i.e., the Socket Modelling Assistant [25]. The architecture of VoLAB (Fig. 6) consists of three Kinect v2, an Oculus Rift v2.0, a Leap Motion device placed on the front side of the Oculus Rift, and a personal computer that runs SMA and manages the synchronization of the devices through the middleware.

Fig. 6. VoLAB hardware architecture.

SMA has been developed to design the 3D socket model of a lower limb prosthesis according to the operations made by technicians during the traditional hand-made manufacturing process. It provides a set of virtual modelling tools starting from the 3D model of the residual limb and the anthropometric data of the patient. Among the several operations needed to model a socket, the definition of load/off-load zones and the sketching of the trim line (i.e., the upper part of the socket) are the most important ones.

The SMA 3D environment, which has been developed using VTK, and the interaction paradigm have been totally re-designed for the use of the Oculus Rift and the Leap Motion device, synchronized thanks to FrameworkVR. Also in this case, a NUI has been defined, as well as a set of gestures and 3D virtual widgets with which the user can design the final socket for a lower limb prosthesis. Figure 7 shows the VoLAB virtual tools with which the user can interact using his/her hands, thus emulating operations traditionally done during the manufacturing process.

Fig. 7. VoLAB virtual tools.

4.3 Gait Laboratory – GLab

GLab is a virtual gait analysis tool that merges the 3D model of the residual limb, acquired by means of 3D scanners (e.g., a laser scanner) or diagnostic devices (e.g., Magnetic Resonance Imaging and Computed Tomography), with pressure data acquired during the amputee's gait. Pressure data are acquired by using the Tekscan system (Fig. 8(a)) [26], while the gait is acquired with a marker-less Mocap system composed of two Kinect v2 devices and iPi Soft (Fig. 8(b)) [27].

Fig. 8. (a) Tekscan sensors applied on the residual limb; (b) gait acquisition with the Mocap system while the patient wears the pressure sensors.

The application has been developed using Qt and VTK. It permits the automatic mapping of the pressure data onto the residual limb model and the detection of gait abnormalities. First, each pressure sensor on the residual limb model is marked with a color. This operation allows defining the area of each sensor stripe and mapping the stripes over the virtual 3D model. The user assigns a different color to each stripe to uniquely distinguish it from the others (Fig. 9(a)).

Fig. 9. Mapping of the pressure values gathered with the Tekscan sensors.

Then, each single sensel (i.e., a single sensor cell along the stripe) is mapped onto a 3D surface patch of the virtual residual limb model. Once the data have been mapped, the values measured by each sensel of each sensor stripe can be quickly visualized during the animation of the gait. Figure 9(b) shows an example of the resulting pressure color map.
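A minimal sketch of this kind of scalar color mapping in VTK is given below: one pressure value per mesh vertex is stored as a point-data array and rendered through a blue-to-red lookup table. The limb mesh and the per-vertex pressure vector are assumed inputs; the paper does not specify GLab's internal data structures.

    #include <vector>
    #include <vtkFloatArray.h>
    #include <vtkLookupTable.h>
    #include <vtkNew.h>
    #include <vtkPointData.h>
    #include <vtkPolyData.h>
    #include <vtkPolyDataMapper.h>

    // Attach per-vertex pressure values to the residual limb mesh and
    // configure the mapper to display them as a color map.
    void colorByPressure(vtkPolyData* limbMesh,
                         const std::vector<float>& pressurePerVertex,
                         vtkPolyDataMapper* mapper) {
        vtkNew<vtkFloatArray> pressure;
        pressure->SetName("Pressure");
        for (float p : pressurePerVertex) pressure->InsertNextValue(p);
        limbMesh->GetPointData()->SetScalars(pressure);

        // Blue (low) to red (high) over the measured pressure range.
        vtkNew<vtkLookupTable> lut;
        lut->SetHueRange(0.667, 0.0);
        lut->SetTableRange(pressure->GetRange());
        lut->Build();

        mapper->SetInputData(limbMesh);
        mapper->SetLookupTable(lut);
        mapper->SetScalarRange(pressure->GetRange());
        mapper->ScalarVisibilityOn();
    }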

In parallel, the application imports the animation of the acquired gait. This permits synchronizing the pressure data with the gait animation in order to study possible correlations between the pressure and the different phases of the gait cycle.

Once the pressure mapping has been done and the gait animation has been imported, the user can easily analyze the gait phases in order to detect possible critical issues. Finally, the application proposes modifications to the prosthesis settings according to the detected abnormalities.

5 Conclusions

Smart and low-cost devices coming from the gaming industry are able to provide interesting performance in research and industrial 3D applications. However, they are not designed to ease data exchange or to be used together; moreover, there are 3D modelling and animation libraries and tools that can be fruitfully exploited. The paper describes a set of low-cost and open source solutions that can be used for developing virtual environments. Three applications have been developed exploiting open-source software and low-cost devices. Two applications use the Oculus Rift and Leap Motion devices in order to create a virtual environment in which the user can interact using his/her hands to design the custom-fit product. The third one uses a low-cost marker-less Mocap system.

The developed applications are under evaluation by experts of each related sector and the preliminary feedback has been very promising. The use of open source systems has permitted us to dramatically decrease the development time of the applications and to increase the quality of the final software architecture, thanks to the smaller number of software bugs found during testing before the final deployment of the applications.