1 Introduction

Prior research shows that 74% of users prefer to use mobile phones with one hand [13]. One-handed usage of handheld devices is commonly observed under situational impairments, e.g. walking, sitting, and standing, where the other hand is busy with real-world tasks [2]. Hence, the design of suitable input interactions for one-handed usage becomes critical in these common scenarios of situational impairment.

Standard handheld devices pose several challenges during one-handed usage, including limited reachability [9], re-gripping of the device [3], reduced accuracy [3], and increased occlusion [3]. These problems are primarily attributed to factors such as the constantly increasing size of devices, hand-thumb morphology, and interactions that demand the attention of both hands [7]. This makes user interactions with handheld devices difficult, especially for tasks that demand multi-finger and/or multi-step interactions such as target zooming [3, 5]. Target zooming is defined as zooming in or out toward a specific location of an interface (e.g. the top-left, top-right, bottom-left, or bottom-right corner). Reaching these locations often demands multi-step interactions: (a) gaining access to unreachable areas and (b) further panning through diagonal, horizontal, and vertical finger movements to reach a user-defined target in the interface. Such tasks are commonly performed in applications such as navigational maps and image galleries.

The problems with one-handed usage are not limited to standard handheld devices but will also be experienced in future flexible handheld devices [13]. Although extensive research has been conducted on improving one-handed interactions on standard handheld devices [1, 3, 9, 10], one-handed interactions for flexible handheld devices remain insufficiently explored. Such investigations are important because flexible devices offer opportunities for deformation that may result in intuitive and efficient one-handed user interactions. The potential of deformation is further amplified by adding touch capabilities, which help offset the inability to perform bimanual interactions in one-handed usage [14].

In this paper, we propose a new input interaction: a combination of central-bend deformation gestures and a touch-supported swipe on the back side of a flexible handheld device to perform target zooming of an image.

2 Design of Proposed Input-Interactions

To illustrate the use case of target zooming with the BendSwipe interaction technique, we use an infographic image and divide it into a 3 × 3 grid (Fig. 1). BendSwipe uses a bend and a swipe gesture to control the direction of zooming, directing it toward a specific location of the 3 × 3 grid. The gesture involves bending the flexible device along its central axis to perform a center squeeze while simultaneously performing a swipe action at the back of the device.

Fig. 1. Division of the infographic image into a 3 × 3 grid
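As an illustration of how the back-of-device swipe could select a cell of this grid, the minimal C++ sketch below quantizes the swipe vector into one of eight 45° compass sectors and maps each sector to a row/column of the 3 × 3 grid. The function name, the quantization, and the cell layout are our assumptions for exposition, not the prototype's implementation (the actual cell numbering follows Fig. 1).

    #include <cmath>

    struct GridCell { int row; int col; };  // 0..2 each; (1,1) is the center cell

    constexpr float kPi = 3.14159265358979f;

    // Quantize a swipe vector (screen coordinates, +y pointing down) into
    // one of eight 45-degree compass sectors and return the matching outer
    // cell of the 3 x 3 grid.
    GridCell swipeToGridCell(float dx, float dy) {
        float angle = std::atan2(dy, dx);                           // (-pi, pi]
        int sector = ((int)std::lround(angle / (kPi / 4)) + 8) % 8; // 0..7
        static const GridCell cells[8] = {
            {1, 2},  // E  -> middle-right
            {2, 2},  // SE -> bottom-right
            {2, 1},  // S  -> bottom-center
            {2, 0},  // SW -> bottom-left
            {1, 0},  // W  -> middle-left
            {0, 0},  // NW -> top-left
            {0, 1},  // N  -> top-center
            {0, 2}   // NE -> top-right
        };
        return cells[sector];
    }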

2.1 Design Rationale

To reduce occlusion and re-gripping events and enable easy target zooming, we chose a combination of a bend gesture and a touch-supported swipe gesture on the back side of the flexible handheld device. We chose bend gestures because the current literature [1, 11, 13] reports a strong preference for them as less time consuming, easy to use, and intuitive, which helps users perform zooming tasks easily and quickly. We also chose a touch-based swipe gesture because the common holding position in one-handed usage frees the index finger on the back side of the device [12]. Swiping the index finger in a specific direction identifies the zooming direction, thus enabling target zooming. We believe that the combination of a bend and a swipe gesture on the back side of flexible handhelds can effectively reduce the current challenges of occlusion, reachability, and re-gripping, and enable target zooming easily and quickly.

2.2 BendSwipe Walkthrough

Zoom-in and zoom-out are triggered by performing a bend and a swipe gesture simultaneously. A center squeeze that bends the central vertical axis (X-X’ in Fig. 2) towards the user is called a bend-in. A bend-in displays the navigation wheel, as shown in Fig. 2(a). Performing a linear swipe gesture in any direction on the back of the device then triggers a zoom-in in the exact direction of the swipe, hence performing a target zoom-in. The zoom-in continues as long as a touch is detected at the end coordinate of the swipe gesture. For example, to zoom in towards the area in grid 8 (Fig. 1), the user performs a bend-in followed by a swipe gesture in the south-east (SE) direction anywhere on the back of the device.
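One way to realize this directional zoom is to scale the image about an anchor point placed in the grid cell selected by the swipe, so that the content converges toward that cell while the touch is held. The following is a minimal sketch under an assumed view convention (screen = offset + scale × image coordinates); the names and transform are ours, not the prototype's code.

    // Assumed view-transform convention: screen = offset + scale * image.
    struct View { float scale; float ox, oy; };

    // Zoom by factor k (k > 1 zooms in) about the screen point (px, py),
    // e.g. the center of the grid cell selected by the swipe. The anchor
    // stays fixed on screen, so the image converges toward it.
    View zoomAbout(View v, float k, float px, float py) {
        v.scale *= k;
        v.ox = px + k * (v.ox - px);
        v.oy = py + k * (v.oy - py);
        return v;
    }

Applying this once per frame with a factor slightly above 1 (or below 1 for zoom-out) while the touch persists produces the continuous, controlled target zoom described above.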

Fig. 2. (a) Bend-in and swiping in the SE direction for zoom-in in the same direction; (b) bend-out and swiping in the SE direction for zoom-out in the same direction

A center squeeze that bends the central vertical axis (X-X’ in Fig. 2) away from the user is called a bend-out. A bend-out displays the navigation wheel, as shown in Fig. 2(b). Performing a linear swipe gesture in any direction on the back of the device then triggers a zoom-out in the exact direction of the swipe, hence performing a target zoom-out. As with zoom-in, the zoom-out continues as long as a touch is detected at the end coordinate of the swipe gesture. For example, to zoom out towards the area in grid 8 (Fig. 1), the user performs a bend-out followed by a swipe gesture in the SE direction anywhere on the back of the device.
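Putting the two walkthroughs together, a small state machine could capture the BendSwipe control flow: the bend direction selects zoom-in versus zoom-out, the swipe fixes the target anchor, and scaling continues as long as the touch persists at the swipe's end coordinate. All names and rate constants below are illustrative assumptions, not the authors' code.

    #include <cmath>

    enum class Bend { None, In, Out };

    struct BendSwipe {
        Bend  bend   = Bend::None;
        bool  active = false;       // true from swipe end until the touch lifts
        float ax = 0.5f, ay = 0.5f; // normalized zoom anchor set by the swipe

        void onBend(Bend b)  { bend = b; if (b == Bend::None) active = false; }
        void onSwipeEnd(float x, float y) {   // end coordinate of the swipe
            if (bend != Bend::None) { ax = x; ay = y; active = true; }
        }
        void onTouchRelease() { active = false; }

        // Called once per frame while the touch is held; dt is in seconds.
        // (ax, ay) would be fed to a zoom-about-anchor transform as above.
        void update(float dt, float& scale) {
            if (!active) return;
            float rate = (bend == Bend::In) ? 0.8f : -0.8f; // assumed zoom speed
            scale *= std::exp(rate * dt);   // exponential change feels steady
        }
    };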

3 Prototype

The flexible prototype is made of a thin sheet of paper (5.5 in.) laminated with plastic sheets on both sides. The plastic coating gives the device the ability to bend and twist easily and adds elasticity. A bend sensor is placed diagonally on the back side to detect bend-in and bend-out of the flexible device (Fig. 3). To detect and track the finger swipe gesture at the back of the device, a deformable, conductive, transparent sensor material [8] is placed on the back side of the device. The movement and location of a finger can be followed across this cross-grid sensor array, allowing the swipe direction to be tracked. The bend sensor and the conductive sensor material send their data to an Arduino microcontroller, which controls the image size via Processing software. The display content is integrated with the device by means of top projection.
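As a rough illustration of the sensing pipeline (not the authors' firmware), an Arduino sketch like the following could read the diagonal bend sensor as an analog value, classify it as bend-in, bend-out, or flat using an assumed dead band, and stream the result over serial to the Processing sketch that resizes the image; the pin, baud rate, and thresholds are assumptions.

    // Illustrative Arduino sketch; calibration values are assumptions.
    const int BEND_PIN = A0;      // bend sensor wired as a voltage divider
    const int FLAT     = 512;     // assumed ADC reading when the device is flat
    const int DEADBAND = 60;      // assumed tolerance band around FLAT

    void setup() {
      Serial.begin(9600);
    }

    void loop() {
      int v = analogRead(BEND_PIN);            // 0..1023
      if      (v > FLAT + DEADBAND) Serial.println("BEND_IN");
      else if (v < FLAT - DEADBAND) Serial.println("BEND_OUT");
      else                          Serial.println("FLAT");
      delay(20);                               // ~50 Hz update rate
    }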

Fig. 3. Placement of sensors for the BendSwipe prototype

4 Conclusion and Future Work

In this paper, we explored the capability of deformation gestures to augment touch gestures and extend their functionality for one-handed handheld usage. We designed a method of target zooming using a central bend gesture and a swipe gesture at the back of the device to overcome the challenges experienced in one-handed usage; these challenges remain insufficiently explored for flexible handheld devices. We presented an input method to zoom in or out towards any area in a continuous and controlled manner. To evaluate the effectiveness of this interaction technique, we will conduct a comparative evaluation against a standard smartphone. The comparative study will investigate (a) task completion time when zooming to several specific target areas and (b) the number of re-gripping events while performing target zooming during the task. We will also measure four usability constructs: (i) ease of use, (ii) intuitiveness, (iii) ease of learning, and (iv) behavioral intention. In addition to assessing the effectiveness of the proposed input interactions, the study can potentially provide insights into the effectiveness of back-of-device swipe used together with deformation gestures in one-handed scenarios.