1 Introduction

The use of motion sensing in input devices such as the Nintendo Wii Remote and Microsoft Kinect is becoming increasingly popular. In particular, hands-free gesture input is promising for such devices. Research efforts have often been devoted to ray casting and its variants [5], which are pointing techniques based on the sensed directions of users’ hands or fingers. However, pointing can also be achieved by using the sensed positions of users’ hands or fingers, that is, by mapping the sensed positions to two-dimensional (2D) or three-dimensional (3D) coordinates in virtual spaces [4]. Along this line, previous research on 3D pointing [3] compared a tablet-like technique based on absolute positioning with a mouse-like technique based on relative positioning, and experimentally showed that the tablet-like one performed better.

We propose a mouse-like hands-free gesture technique for 2D pointing. It is characterized as follows:

  1. A user horizontally moves his/her hand to position a cursor shown on a vertical screen (Fig. 1(a));

  2. The user activates cursor movement by opening his/her hand, and deactivates it by clenching;

  3. The user performs target selection by “clicking” in the air with his/her index finger (Fig. 1(b));

  4. The user is assisted in quick but precise cursor movement by automatic acceleration.

We performed a user study to experimentally compare the mouse-like technique with a tablet-like one. Eight subjects performed target selection tasks with both techniques. A two-way ANOVA on the experimental results indicated a significant difference between the two techniques, and six out of the eight subjects answered that the mouse-like technique was easier to use.

Fig. 1. Mouse-like pointing technique.

2 Proposed Technique

This section proposes a mouse-like hands-free gesture technique for 2D pointing.

2.1 Cursor Movement

Most hands-free gesture techniques for 2D pointing fall into two categories: those based on the sensed directions of users’ hands or fingers, and those based on the sensed positions of hands or fingers. The latter category can be further divided into two approaches: the tablet-like approach, which uses absolute positioning of hands or fingers, and the mouse-like approach, which adopts relative positioning.

We propose a mouse-like pointing technique. Our technique is unique in that it utilizes the horizontal movement of a user’s hand to obtain a cursor position on a vertical screen, as shown in Fig. 1(a). More specifically, if the user moves his/her hand toward the screen, the cursor moves upward on the screen; if the user moves the hand toward himself/herself, the cursor moves downward; if the user moves the hand to the left or to the right, the cursor moves in the same direction. We expect that, compared to vertical movement, this horizontal movement of the hand reduces fatigue. Although the movement of the hand and the corresponding movement of the cursor are not parallel, the same is true of existing 2D pointing devices such as mice and touchpads.
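
A minimal sketch of this axis mapping is given below. It is illustrative only: the names are assumptions, and the sensor coordinate frame is assumed to have its x axis pointing to the user’s right and its z axis pointing from the screen toward the user, with screen coordinates whose y value grows downward.

    // Hypothetical sketch of the axis mapping, not the actual implementation.
    static class HandMapping
    {
        // dxMm: hand displacement to the user's right (mm).
        // dzMm: hand displacement toward the user (mm); negative means toward the screen.
        public static void MapHandToScreen(double dxMm, double dzMm,
                                           out double cursorDx, out double cursorDy)
        {
            cursorDx = dxMm;  // hand to the right  -> cursor to the right
            cursorDy = dzMm;  // hand toward user   -> cursor down; toward screen -> up
            // Scaling and acceleration are applied afterwards (see Sect. 3).
        }
    }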

Since our pointing technique is based on the mouse-like approach, the cursor moves on the screen only while cursor movement is activated. For this purpose, we let the user make the following hand gestures: to activate cursor movement, the user opens his/her hand; to deactivate it, the user clenches.

In addition, we incorporate a cursor acceleration method, which is common in existing 2D pointing devices, to assist users in quick but precise cursor movement. In the case of mice, cursor acceleration uses a nonlinear mapping from the velocity of the mouse to the velocity of the cursor. More specifically, if the user moves the mouse slowly, the cursor moves slowly on the screen; if the user moves the mouse more quickly, the cursor moves much more quickly. This enables almost stress-free cursor movement on a large screen while still allowing careful cursor movement when necessary. Since our pointing technique is based on the mouse-like approach, we can naturally incorporate cursor acceleration into it.
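
The per-frame update of the mouse-like approach, combining the open/clench activation with a movement-dependent gain, can be sketched as follows. This is illustrative only: isHandOpen and gainPxPerMm are assumed inputs, and the concrete gain values we used appear in Sect. 3.

    using System;

    // Illustrative per-frame cursor update for the mouse-like approach.
    // All names are assumptions; dxMm and dzMm are the hand displacement in this frame.
    static class MouseLikeUpdate
    {
        public static void UpdateCursor(bool isHandOpen, double dxMm, double dzMm,
                                        Func<double, double> gainPxPerMm,
                                        ref double cursorX, ref double cursorY)
        {
            if (!isHandOpen)
                return;                              // clenched hand: movement deactivated

            double distMm = Math.Sqrt(dxMm * dxMm + dzMm * dzMm);
            double gain = gainPxPerMm(distMm);       // nonlinear acceleration (Sect. 3)

            cursorX += dxMm * gain;                  // left/right
            cursorY += dzMm * gain;                  // toward the user => down on the screen
        }
    }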

2.2 Target Selection

We adopt a hand gesture again to let the user select a target on the screen. Specifically, the user selects the target by “clicking” in the air with his/her index finger, as shown in Fig. 1(b). Since the user moves only the index finger without moving the hand itself, we expect this operation to have almost no negative effect on cursor movement. Further operations could be incorporated in the same way by using fingers other than the index finger.

3 Implementation

We implemented the proposed technique by using the Leap Motion controller [4]. The controller uses infrared LEDs and cameras to recognize hand gestures that a user performs above it. We implemented the program as a Windows Presentation Foundation application written in C# with Microsoft Visual Studio 2013. The program consists of 850 lines of code.

We used the position of the user’s middle finger as the position of the user’s hand. To identify whether the user’s hand was open or clenched, we used Leap Motion’s “confidence” data. To identify whether the user “clicked” with the index finger, we used the height of the index fingertip relative to the average height of the ring and little fingertips.
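
A sketch of this click heuristic is shown below. It assumes fingertip heights given in millimeters above the controller, and the drop threshold is a hypothetical value, not the one used in our implementation.

    // Illustrative click detection (hypothetical names and threshold).
    // The index fingertip is considered to have "clicked" when it dips
    // sufficiently below the average of the ring and little fingertips.
    static class ClickDetector
    {
        public static bool IsIndexClick(double indexTipY, double ringTipY,
                                        double littleTipY, double dropThresholdMm)
        {
            double referenceY = (ringTipY + littleTipY) / 2.0;  // average of ring and little
            return indexTipY < referenceY - dropThresholdMm;    // index dips below reference
        }
    }

For example, with dropThresholdMm set to 15 (a purely illustrative value), a click is reported when the index fingertip is at least 15 mm lower than the average of the other two fingertips.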

We implemented cursor acceleration by simulating the mouse acceleration of Windows XP [1]. Specifically, if the distance moved by the user’s hand in one frame is less than 0.44 mm, the cursor gain is 12.0 pixels/mm; if the hand movement is between 0.44 and 1.25 mm, the gain is 18.1 pixels/mm; if the hand movement is between 1.26 and 3.86 mm, the gain is 27.5 pixels/mm; if the hand movement is more than 3.86 mm, the gain is 56.8 pixels/mm.
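
Expressed as code, this piecewise gain can be sketched as follows (the boundaries follow the values listed above; distMm is the per-frame hand movement in millimeters):

    // Piecewise cursor gain (pixels per millimeter of hand movement per frame),
    // following the boundary values listed above.
    static class AccelerationCurve
    {
        public static double GainPxPerMm(double distMm)
        {
            if (distMm < 0.44)  return 12.0;
            if (distMm <= 1.25) return 18.1;
            if (distMm <= 3.86) return 27.5;
            return 56.8;                      // more than 3.86 mm per frame
        }
    }

A gain function of this form can be passed as gainPxPerMm to the per-frame update sketched in Sect. 2.1.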

4 Experiments

This section describes the experiments that we conducted to evaluate the performance of the proposed mouse-like technique.

4.1 Method

We experimentally compared the proposed mouse-like technique with a tablet-like one. The tablet-like technique directly maps the absolute position of a user’s hand to coordinates on a screen (Fig. 2(a)), and enables the user to perform target selection by “tapping” in the air (Fig. 2(b)).

Fig. 2. Tablet-like pointing technique.

We designed a task to evaluate the pointing performance of users with the mouse-like and tablet-like techniques. In this task, a user selects targets that repeatedly appear at random positions on the screen. The position of a new target is always at most 1000 pixels away from the previous one. The size of each target is 74 \(\times \) 100 pixels, which is almost equal to the size of a standard icon on Windows with a 1920 \(\times \) 1080-resolution screen. We designed this task in a similar way to that used for a predictive cursor movement technique [2]. However, the behavior of the targets in our task is more random than in that task.
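
The target placement in this task can be sketched as follows. The rejection-sampling loop and the class name are assumptions for illustration, while the screen size, target size, and 1000-pixel distance bound come from the task description.

    using System;

    // Illustrative target generation: each new target center is placed uniformly
    // at random on the screen, rejecting positions more than 1000 px from the
    // previous target center.
    static class TargetGenerator
    {
        const int ScreenW = 1920, ScreenH = 1080;
        const int TargetW = 74, TargetH = 100;
        const int MaxDistancePx = 1000;
        static readonly Random Rng = new Random();

        public static void NextTarget(int prevX, int prevY, out int x, out int y)
        {
            while (true)
            {
                x = Rng.Next(TargetW / 2, ScreenW - TargetW / 2);  // keep target on screen
                y = Rng.Next(TargetH / 2, ScreenH - TargetH / 2);
                long dx = x - prevX, dy = y - prevY;
                if (dx * dx + dy * dy <= (long)MaxDistancePx * MaxDistancePx)
                    return;
            }
        }
    }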

Eight subjects performed the target selection task. All subjects were male and right-handed, and their average age was 21.8 years. Half of them used the mouse-like technique first and then the tablet-like one, and the other half used the techniques in the reverse order. Each subject had a five-minute practice before the task and a five-minute rest after it. A subject repeated the task 100 times for each technique, that is, 200 times in total. Each subject’s experiment took approximately 30 min. We used a 1920 \(\times \) 1080-resolution LCD display. We recorded the time for each task, the number of misclicks, and the position of each target. After each subject’s experiment, we administered a questionnaire with five-grade ratings of the usability and the fatigue caused by the two techniques.

4.2 Results

We compared the two techniques by analyzing the data obtained from the experiments. To simplify the analysis, we classified target distances into ten levels. A two-way ANOVA on technique and target distance indicated significant differences between the two techniques (\(F(1, 1580) = 3.900\), \(p < 0.001\)) and among the target distances (\(F(9, 1580) = 7.006\), \(p < 0.01\)). However, no significant interaction was found between technique and target distance. Figure 3 plots the relation between target distance and pointing time. The bars indicate the average times over all subjects, and the error bars indicate the standard deviations. The mouse-like technique resulted in smaller standard deviations than the tablet-like one for most of the target distances.

Fig. 3. Results of the experiments comparing the mouse-like and tablet-like pointing techniques.

Both techniques resulted in average error rates of approximately 40 %. By contrast, the average error rate for the same task performed with a mouse was approximately 4 %. This indicates that the hand gesture techniques were less stable than mouse-based operation. The total numbers of misclicks were 774 for the mouse-like technique and 862 for the tablet-like one.

In the questionnaires after the experiments, six out of the eight subjects answered that the mouse-like technique was easier to use. Four out of the eight subjects answered that the tablet-like technique had caused more fatigue than the mouse-like one. We also received comments that subjects had experienced difficulty in selecting targets located near the edges of the screen. Both techniques received only a few high ratings, which suggests that the subjects felt stress during hands-free gesture pointing.

5 Conclusions and Future Work

We proposed a mouse-like hands-free gesture technique for 2D pointing. It uses the horizontal movement of a user’s hand, and adopts hand gestures to enable relative cursor movement and target selection. We experimentally evaluated the performance of the proposed mouse-like technique by comparing it with a tablet-like one. In a study with eight subjects, the mouse-like technique reduced pointing times and yielded more stable operation. The questionnaires after the experiments also indicated that the mouse-like technique was easier to use and caused less fatigue.

Our future work includes improving the recognition of hand gestures. There are still many hand gestures that cannot be recognized precisely. To enhance the usefulness of hands-free gesture pointing techniques, we need more precise recognition of hand gestures. Another direction for future work is to devise a better evaluation method for gesture pointing techniques, since the experiments that we performed were originally designed for mouse-based pointing techniques.