For testing purposes, we implemented an Android application for gathering touchscreen pointing events and the corresponding timing data. The application stores the measured results, together with the user ID and the interaction style used, in CSV format on the device’s internal SD card. By interaction style we refer to the combination of hand posture and device orientation while executing pointing tasks. Specifically, we investigate thumb-based pointing performance on portrait-oriented smartphones, as well as forefinger-based pointing on smartphones and tablets in both portrait and landscape orientation (Fig. 2). Forefinger-based pointing corresponds to the use case wherein one hand holds the device while the other, usually the dominant one, performs the pointing task.
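As an illustration of the log format, a single CSV record could combine the identifiers and measurements named above. The exact column set and the helper names below are our assumptions for the sketch; the text only specifies that results, user ID, and interaction style are stored.

```java
import java.util.Locale;

/** Minimal sketch of one CSV log record, assuming a column layout of
 *  user ID, device, interaction style, task parameters, and movement time.
 *  Class, method names, and columns are illustrative, not the authors' code. */
public class CsvLog {
    public static String header() {
        return "userId,device,interactionStyle,A,W,ID,movementTimeMs";
    }

    /** Formats one completed pointing task as a CSV row (Locale.US keeps
     *  the decimal separator a dot regardless of device locale). */
    public static String row(String userId, String device, String style,
                             double a, double w, double id, long mtMs) {
        return String.format(Locale.US, "%s,%s,%s,%.1f,%.1f,%.1f,%d",
                userId, device, style, a, w, id, mtMs);
    }
}
```

On Android, such rows would typically be appended to a file on external storage via a `FileWriter` or similar stream.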
Target pointing tasks for Fitts’ law verification are easy to implement, as a single task instance only needs a designated starting point and a given target. However, unlike the usual approach, which assumes predefined sets of distances A and target sizes W (cf. [5–7, 9–13]), our application randomly generates pointing tasks according to: (i) the mobile device screen size, (ii) the position and size of the starting point, and (iii) the defined margins for the rectangular target width (Fig. 3).
Specifically, for five possible starting touch areas (four in the screen corners and one in the middle), a random set of rectangular shapes is generated, representing pointing targets whose distances from the starting point A and target sizes W jointly form particular ID values with a resolution of 0.5. The smaller dimension of the rectangular shape is taken as the actual target width; hence, the Fitts’ law revision we evaluate here is the well-known MacKenzie-Buxton smaller-of model. The example presented in Fig. 3 can be elaborated in more detail: if the task generator produces 7 random tasks with ID values between 1.0 and 4.0 for each corner-positioned starting point, as well as 6 random tasks with ID values between 1.0 and 3.5 for the starting point located in the middle of the screen, this makes a total of 34 pointing tasks covering a wide range of finger movements on a particular display. We believe that a testing cycle designed in this way provides a better representation of the user’s real pointing scenarios than a “static” design with a single starting point and a smaller set of predefined A × W combinations.
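The core of such a generator can be sketched as follows: for each target ID on the 0.5-resolution grid, draw a random effective width W within the configured margins and solve the Shannon formulation ID = log2(A/W + 1) for the amplitude A. Class and method names are our assumptions, and the on-screen placement of the rectangles (direction from the starting point, bounds checking) is omitted for brevity.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

/** Sketch of the random pointing-task generator described in the text.
 *  Names and parameters are illustrative assumptions, not the authors' code. */
public class TaskGenerator {
    /** One pointing task: amplitude A, effective width W (the smaller
     *  rectangle dimension, per the MacKenzie-Buxton smaller-of model),
     *  and the resulting index of difficulty. */
    public static final class Task {
        public final double A, W, id;
        Task(double A, double W, double id) { this.A = A; this.W = W; this.id = id; }
    }

    /** Fitts' index of difficulty, Shannon formulation: ID = log2(A/W + 1). */
    public static double indexOfDifficulty(double A, double W) {
        return Math.log(A / W + 1.0) / Math.log(2.0);
    }

    /** For each target ID in [idMin, idMax] with 0.5 resolution, draw a
     *  random width in [wMin, wMax] and derive A = W * (2^ID - 1) so that
     *  the pair (A, W) hits the target ID exactly. */
    public static List<Task> generate(double idMin, double idMax,
                                      double wMin, double wMax, Random rnd) {
        List<Task> tasks = new ArrayList<>();
        for (double id = idMin; id <= idMax + 1e-9; id += 0.5) {
            double w = wMin + (wMax - wMin) * rnd.nextDouble();
            double a = w * (Math.pow(2.0, id) - 1.0);
            tasks.add(new Task(a, w, id));
        }
        return tasks;
    }
}
```

With these bounds, an ID range of 1.0–4.0 yields 7 tasks per corner starting point and a range of 1.0–3.5 yields 6 for the middle one, matching the 4 × 7 + 6 = 34 tasks of the example.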
Time measurement is implemented with the SystemClock.elapsedRealtime() method, as the underlying clock is guaranteed to be monotonic, tolerant to power-saving modes, and is the recommended basis for general-purpose interval timing on Android devices. The time taken to complete the required movement in a pointing task is the interval between a tap-up action inside the starting point and a tap-down action within the target shape area (Fig. 4).
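The tap-up/tap-down interval logic can be sketched as a small state holder. On Android the clock source would be `SystemClock::elapsedRealtime` and the two methods would be invoked from `MotionEvent` handlers (ACTION_UP on the starting point, ACTION_DOWN on the target); here the clock is injected so the sketch runs off-device, and all names are our assumptions.

```java
import java.util.function.LongSupplier;

/** Sketch of the movement-time measurement described in the text: the
 *  interval between a tap-up inside the starting point and a tap-down
 *  inside the target. The clock is injected; on Android one would pass
 *  SystemClock::elapsedRealtime. Names are illustrative assumptions. */
public class MovementTimer {
    private final LongSupplier clock;   // returns milliseconds
    private long startMs = -1;

    public MovementTimer(LongSupplier clock) { this.clock = clock; }

    /** Called when the finger lifts (ACTION_UP) inside the starting point. */
    public void onStartTapUp() { startMs = clock.getAsLong(); }

    /** Called when the finger lands (ACTION_DOWN) inside the target;
     *  returns the movement time in milliseconds. */
    public long onTargetTapDown() {
        if (startMs < 0) throw new IllegalStateException("no start tap recorded");
        long mt = clock.getAsLong() - startMs;
        startMs = -1;   // reset for the next task instance
        return mt;
    }
}
```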
In our empirical research, 35 users were recruited (28 males, 7 females), aged 21 to 31, with an average of 23.1 years (SD = 2.2). Only two of them were left-handed. Every user confirmed adequate experience in operating touchscreen smartphones and tablets, and 80% of them declared an Android-based device as their personal one.
We used four different mobile devices (D1–D4) running the Android OS in the experiment, two from the smartphone class (D1, D2) and two from the tablet class (D3, D4). For every form factor (smaller smartphone, larger smartphone, smaller tablet, and larger tablet) we defined configuration parameters for the random pointing-task generator: the dimensions of the starting-point area and the threshold values for target sizes. Both the display characteristics and suitable target dimensions were considered in this procedure, yielding a different task ID range for each device. As expected, larger devices enable pointing tasks with a wider ID range. Details about all devices used and the task configuration parameters are presented in Table 2.
To familiarize themselves with the available devices and the testing application’s features, users took part in a short practice session at the beginning of testing. In the actual experiment, participants were instructed to input their unique identifier and to complete a given cycle of randomly generated pointing tasks for each combination of available device (D1–D4) and applicable interaction style (thumb/portrait, forefinger/portrait, forefinger/landscape). The time between two task instances within a cycle, when no actual pointing was performed, was not measured. Cycles consisted of 34, 38, 43, and 48 pointing tasks for each interaction style used on D1, D2, D3, and D4, respectively. If a particular target was missed, a new task instance with the same ID was generated. To further differentiate the starting point from the target area, the related rectangles were marked with the numbers 1 (starting point) and 2 (pointing target). The start and the end of the testing cycle were acknowledged with appropriate application messages. Although the learning effect seemed negligible for simple touchscreen pointing tasks in our experimental setup, the sequence of experimental conditions, i.e. both the device order and the interaction style order, was nevertheless counterbalanced.
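One common way to counterbalance presentation order is a balanced Latin square; the text does not state which scheme was used, so the construction below is only an assumption for illustration. The standard row formula shown works for an even number of conditions (such as the four devices); for an odd count (such as the three interaction styles), the square and its mirror image are typically combined.

```java
/** Sketch of order counterbalancing via a balanced Latin square.
 *  The scheme is our assumption; the text only states that device and
 *  interaction-style orders were counterbalanced. Valid for even n. */
public class Counterbalance {
    /** Row r of a balanced Latin square over n conditions: the sequence
     *  r, r+1, r-1, r+2, r-2, ... (mod n), so that each condition appears
     *  once per position and precedes every other condition equally often. */
    public static int[] balancedLatinRow(int n, int r) {
        int[] row = new int[n];
        row[0] = ((r % n) + n) % n;
        for (int j = 1; j < n; j++) {
            int step = (j + 1) / 2;          // 1, 1, 2, 2, 3, 3, ...
            row[j] = (j % 2 == 1)
                    ? (r + step) % n                  // forward step
                    : ((r - step) % n + n) % n;       // backward step
        }
        return row;
    }
}
```

For n = 4 devices, participant p would receive the device order given by row p mod 4, e.g. row 0 is D1, D2, D4, D3.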