1 Introduction

Virtual reality (VR) platforms based on low-cost 3-dimensional head-mounted displays (HMDs), such as the Oculus Rift [1], are becoming popular in a variety of fields that make use of virtual reality environments. Development of user interfaces (UIs) for VR HMD platforms has become one of the most active topics in both research and industry, and control input through gesture recognition is likely to become a major user interface method. There has been considerable prior research on integrating gesture recognition into command and motion interfaces for VR HMD platforms. Andrea et al. proposed hardware as well as software for sensing 3D gestures to interact with a head-mounted display [2]. Similarly, O. Jason et al. proposed an intelligent text management system that actively manages the movement of text in a user’s field of view by recognizing hand gestures [3]. Otmar et al. presented 3D interactions with a situated see-through display [4]. However, one of the biggest hurdles in current approaches is that a non-see-through HMD cannot visualize the user’s motions, since the entire environment beyond the device is invisible during operation [5]. Although various studies have attempted to overcome this limitation, a method for controlling the user interface on non-see-through head-mounted displays remains indispensable. Shahram et al. proposed an initial version of a multi-modal gesture interface invariant to the visibility of the display and constructed a scenario list to architect the gesture interface [6]. However, conventional gesture scenarios are based on hand motions or poses without considering a functional combination of the traditional touch interface and the cursor interface of mouse movements [7, 8]. To the best of the authors’ knowledge, a means of entering the user’s intention in synchronization with the user’s movements on a non-see-through HMD has not yet been reported.

In this paper, we propose Cybertouch, a user interface for 3D VR HMDs that provides a command input interface fully synchronized with the user’s motions, using wearable IMUs (inertial measurement units) and EMG (electromyography) sensors to enable real-time selection of operational commands together with recognition of user motions and gestures. The concept of the proposed interface is presented in Sect. 2. The prototype and experimental results are described in Sect. 3, and the work is concluded in Sect. 4.

2 Proposed Cybertouch Interface

We defined eight commands, each given by a combination of gestures and motions. The main consideration in determining the set of commands and corresponding gestures is that they should reflect natural human behavior, in order to address the experiential, affective, and practical aspects of human-computer interaction. We use both EMG sensors and IMUs as input devices to detect hand motions and finger poses. EMG sensors attached to the forearm have a limited ability to detect the signals associated with each individual finger motion. By fusing the IMU data with the EMG signals, we can recognize commands by matching combinations of hand motions and poses, as shown in Table 1. The eight essential Cybertouch commands cover basic control of both applications and the operating system. For example, Grab-Drag-Drop can be used to move an object in a game application, and likewise to move an icon or file when performing an OS task. Escaping is a special control command used to quit or exit the running environment. Table 2 compares conventional devices with the proposed Cybertouch.

Table 1. Definition of commands and corresponding hand gestures and motions
Table 2. Comparison of conventional devices and Cybertouch
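To make the matching step concrete, the following minimal Python sketch shows how fused EMG poses and IMU motions could be looked up in a command table in the spirit of Table 1. It is not the authors' implementation, and the pose and motion labels, apart from the Grab-Drag-Drop and Escaping commands named above, are hypothetical:

from dataclasses import dataclass
from typing import List, Optional

@dataclass(frozen=True)
class GestureEvent:
    pose: str    # hand/finger pose inferred from the EMG signal
    motion: str  # hand motion inferred from the IMU data

# Hypothetical fragment of the command table; the paper defines eight
# commands, of which only Grab-Drag-Drop and Escaping are named in the text.
COMMAND_TABLE = {
    (("fist", "drag"), ("open", "still")): "Grab-Drag-Drop",
    (("open", "flick"),): "Escaping",
}

def recognize(events: List[GestureEvent]) -> Optional[str]:
    """Match a fused pose/motion sequence against the command table."""
    key = tuple((e.pose, e.motion) for e in events)
    return COMMAND_TABLE.get(key)

# Example: a grab-and-drag followed by a release maps to Grab-Drag-Drop.
assert recognize([GestureEvent("fist", "drag"),
                  GestureEvent("open", "still")]) == "Grab-Drag-Drop"

In the actual system, the pose labels would come from the EMG sensing and the motion labels from the IMU data, as described above.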

3 Configuration and Experimental Results

The configuration of the system interface using the proposed Cybertouch is illustrated in Fig. 1. The Event Listener receives packets from the IMU and EMG devices and detects events by analyzing the packets from the wearable input devices. The detected events are dispatched to the Command Recognizer, which generates input commands by comparing the incoming sets of events with the predefined set of commands and gestures shown in Table 1. Cybertouch input commands can also be dispatched to the OS kernel through a message function call of the kernel API. The primary goal of Cybertouch commands is to provide control commands for applications.

Fig. 1. A brief illustration of the proposed Cybertouch configuration

In addition, dispatching commands to the OS kernel makes it possible to control the system while wearing a non-see-through HMD without using a mouse or touch panel.
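As a rough illustration only, the following Python sketch mirrors the Event Listener to Command Recognizer to OS/application flow of Fig. 1. Here parse_packet() and post_os_message() are stand-ins for device-specific decoding and the OS kernel message call, not the actual APIs:

def parse_packet(packet):
    """Stub: decode a raw IMU/EMG packet into a (pose, motion) event."""
    return (packet["pose"], packet["motion"])

def post_os_message(command):
    """Stub standing in for the message function call of the OS kernel API."""
    print("dispatching:", command)

class CommandRecognizer:
    """Compares incoming event sequences with the predefined command set."""
    def __init__(self, command_table, window=8):
        self.command_table = command_table  # e.g. the table sketched in Sect. 2
        self.window = window
        self.buffer = []

    def on_event(self, event):
        self.buffer.append(event)
        command = self.command_table.get(tuple(self.buffer))
        if command is not None:
            self.buffer.clear()
            post_os_message(command)
        elif len(self.buffer) >= self.window:
            self.buffer.pop(0)  # discard stale events that match no command
        return command

class EventListener:
    """Receives packets from the wearable devices and forwards events."""
    def __init__(self, recognizer):
        self.recognizer = recognizer

    def on_packet(self, packet):
        return self.recognizer.on_event(parse_packet(packet))

Feeding the two example packets of the Grab-Drag-Drop sequence through on_packet() would then end in a single dispatch call to the OS or application.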

To demonstrate the proposed Cybertouch system, we developed a VR simulator that requires a VR HMD and provides a user experience of the Cybertouch commands. To configure the system, we used two IMUs with EMG sensors, attached to the left and right forearms, as shown in Fig. 2.

Fig. 2. Experimental setup of Cybertouch, the proposed touch and cursor interface for the VR HMD platform, using IMUs and EMG sensors for gesture and motion sensing
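For completeness, the layout of the two wearable units in this setup can be summarized as follows; the identifiers are placeholders chosen for illustration, not real device names:

# Hypothetical description of the Fig. 2 setup: one IMU + EMG unit per
# forearm, both feeding the same Event Listener sketched in Sect. 3.
SENSOR_UNITS = {
    "left_forearm":  {"imu": "imu_left",  "emg": "emg_left"},
    "right_forearm": {"imu": "imu_right", "emg": "emg_right"},
}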

4 Conclusions

We have proposed Cybertouch, a platform for 3D virtual reality head-mounted displays (VR HMDs) that provides the traditional input control commands of both the touch and cursor interfaces, fully synchronized with the user’s motions, using wearable IMU and EMG devices to enable real-time selection of operational commands together with recognition of user motions and gestures.

The proposed Cybertouch provides a user interface that combines the functions of traditional mice and touch panel devices, specialized for games and immersive virtual training applications using 3D HMDs. Our in-progress Cybertouch interface and an embedded application will be demonstrated at the conference.