4.1 GlassNage Application Statistics
To evaluate the performance of the GlassNage mobile application, we used the Android application analysis tools available in the Android SDK. We deployed GlassNage on a Glass-like wearable device equipped with a dual-core OMAP4430 System-on-Chip (SoC) and 2 GB of RAM, running the Android 4.4.2 operating system. Our layout recognition runs in real time at up to 8 fps at 1280 × 720 (720p) resolution. The camera of this device has a 54.8-degree horizontal and 42.5-degree vertical Angle-of-View. A series of quantitative and qualitative user studies was conducted to test the feasibility of the GlassNage approach. The results are presented in the following subsections.
4.2 Focus of the User Study
In our experiments, we used the GlassNage mobile application deployed on a Glass-like wearable device. We identified a usability issue that arises when a user tries to frame the content they are actually looking at within the camera frame: the user's natural Field-of-Vision (FoV) does not align well with the camera's Angle-of-View (AoV). This is mainly caused by the following factors:
1. There is only a single camera (i.e., not a stereoscopic pair), so the device cannot compensate for 3D gaze.
2. The camera is positioned at the front-right part of the device's frame, which does not match the centroid of the human FoV.
3. The camera has a relatively narrow AoV (54.8 degrees horizontal and 42.5 degrees vertical) compared to the roughly 124 degrees total FoV of a human (quantified below).
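As a rough quantification of cause 3 (treating the 124 degrees as the total horizontal FoV, which is our reading of the figure above), the camera captures less than half of the horizontal field the user perceives:

    \frac{\text{camera horizontal AoV}}{\text{human horizontal FoV}} = \frac{54.8^{\circ}}{124^{\circ}} \approx 0.44

That is, more than half of what the user sees horizontally lies outside the camera frame, which makes misalignment between the perceived and the captured scene likely.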
We illustrate this finding in Fig. 5. Based on this finding, we first focused on indicative factors of users' targeting behaviour as well as their accuracy. The second focus of our study was to assess users' perceived workload when using GlassNage.
4.3 Quantitative User Study
We conducted a series of quantitative user studies to test users' behaviour as well as their accuracy when trying to align their perceptual FoV with the Glass device's camera AoV.
Participants. We recruited six participants for this study, all from outside of our research organization. They were four males and two females, aged 25.4 ± 4.21 years. All of the participants were familiar with the Glass-like wearable device and were confident in wearing it, viewing its display, and interacting with the side-mounted touch panel.
Procedures. First, we instructed the participants to stand 2 meters in front of a 60-inch monitor pivoted vertically (resembling the setup depicted in Fig. 1, left). The monitor displayed the signage content previously described in Fig. 3. We assigned 11 sections in the signage, presenting content related to a coffee shop menu, news, weather, etc. Second, we instructed the participants to comfortably center their head posture and FoV and to focus their eye gaze onto a designated section, promptly followed by taking a picture with the camera app of the Glass-like wearable device. We asked each participant to perform this task for all 11 sections of the signage, and to repeat this series of tasks 5 times.
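As a side note (not part of the original analysis), the camera footprint at the 2 m standing distance can be estimated from the AoV reported in Sect. 4.1; the 16:9 aspect ratio of the 60-inch panel is our assumption:

    import math

    def camera_footprint(distance_m, aov_h_deg, aov_v_deg):
        """Width and height (in metres) covered by the camera frame at a given distance."""
        width = 2 * distance_m * math.tan(math.radians(aov_h_deg) / 2)
        height = 2 * distance_m * math.tan(math.radians(aov_v_deg) / 2)
        return width, height

    # Camera AoV of the Glass-like device (Sect. 4.1) at the 2 m standing distance.
    w, h = camera_footprint(2.0, 54.8, 42.5)
    print(f"camera footprint at 2 m: {w:.2f} m x {h:.2f} m")  # ~2.07 m x 1.56 m

    # 60-inch panel, assumed 16:9, pivoted to portrait: long side vertical, short side horizontal.
    diag_m = 60 * 0.0254
    long_side = diag_m * 16 / math.hypot(16, 9)   # ~1.33 m (vertical extent)
    short_side = diag_m * 9 / math.hypot(16, 9)   # ~0.75 m (horizontal extent)
    print(f"portrait signage: {short_side:.2f} m wide x {long_side:.2f} m tall")

Under these assumptions the entire portrait signage can fit inside the camera frame at 2 m with reasonable aiming, so the deviations reported below reflect alignment behaviour rather than the signage being cropped out of view.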
Data Statistics. For each signage section, we obtained 5 images from each participant; i.e., in total we have 30 images capturing the same signage section. The images were taken using the 5 MP camera of the Glass-like wearable device, at 2528 × 1856 pixel resolution. We imported the images from the internal storage of the Glass-like wearable device to a desktop PC for further analysis. We did not change the size or aspect ratio of the images.
Processing and Analysis. First, we locate the centroid of each image, (xc, yc) = (1129, 928). We then locate the target signage section in the image, extract its centroid (xs, ys), and calculate the Euclidean distance between the image centroid (xc, yc) and the captured section centroid (xs, ys).
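A minimal sketch of this per-image measurement, assuming section locations are available as axis-aligned bounding boxes in image coordinates (the paper does not state how the section centroids were extracted); the example bounding boxes are hypothetical:

    import math
    from statistics import mean, stdev

    IMAGE_CENTROID = (1129, 928)  # image centroid (xc, yc) used in the analysis

    def section_centroid(bbox):
        """Centroid (xs, ys) of a section bounding box given as (x_min, y_min, x_max, y_max)."""
        x_min, y_min, x_max, y_max = bbox
        return ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)

    def deviation(image_centroid, section_bbox):
        """Euclidean distance (pixels) between the image centroid and the section centroid."""
        xs, ys = section_centroid(section_bbox)
        xc, yc = image_centroid
        return math.hypot(xs - xc, ys - yc)

    # Hypothetical bounding boxes of the same target section across several captures.
    captured = [(820, 400, 1500, 900), (870, 430, 1550, 930), (790, 380, 1470, 880)]
    devs = [deviation(IMAGE_CENTROID, b) for b in captured]

    # Table 1 reports this kind of aggregate over the 30 images collected per section.
    print(f"absolute deviation: {mean(devs):.1f} +/- {stdev(devs):.1f} px")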
Results. We compile the results in Table 1, which shows a comprehensive comparison of how users' targeting behaviour and accuracy are affected by the size and position of the signage sections. Sections 1, 2, 3, 8, and 10 represent larger signage section areas; in these sections, users' center-targeting absolute deviations were relatively high. Sections 6, 7, 9, and 11 represent horizontally wide sections; notably, for sections 6 and 7, the absolute deviations were the highest of all sections. Interestingly, sections with a small area, such as 4, 5, and 9, have relatively low absolute deviations.
Table 1. The absolute deviations (mean ± std) of gazing towards a target
Insights and Design Implications. In addition to assessing participants' accuracy when trying to align their perceptual FoV with the Glass device's camera AoV, we observed participants' behaviour during the signage-section targeting study. We compile our insights below:
1. Participants tended to fine-adjust their framing when targeting sections with a small area (e.g., sections 4, 5, and 9). We can therefore conclude that users are more cautious during targeting (hence their overall absolute deviations are lower) than when targeting sections with a larger area.
2. Sections with a larger area give users more instant confidence in the targeting task; however, the absolute deviations are relatively high. Therefore, we need to incorporate a more deviation-permissive framing procedure into the GlassNage app, or into any other system that relies on Glass-mounted camera capture (a sketch of such a check follows this list).
3. Sections with a horizontally wide area are quite difficult for users to target when we compare users' targeting against the section's centroid. This is mainly due to users' spatial perception when framing such sections: users are more likely to be satisfied with their framing even though it is not horizontally centered.
4. Overall, using our Glass-like wearable device, we learned that a more sophisticated alignment method is desirable to support users' perceptual matching between their FoV and the Glass device's camera AoV. In the current GlassNage implementation, we mitigate this issue by allowing the user to first capture an image and then perform layout recognition. By doing so, we allow users to capture an image that contains their section-of-interest as well as other important landmark features.
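The following is one possible reading of the deviation-permissive framing mentioned in insight 2, not the GlassNage implementation; the function name and tolerance value are illustrative:

    import math

    def is_acceptably_framed(image_centroid, section_bbox, tolerance_px=300):
        """Accept a capture when the target section's centroid lies within
        `tolerance_px` of the image centroid, instead of requiring exact alignment."""
        xs = (section_bbox[0] + section_bbox[2]) / 2.0
        ys = (section_bbox[1] + section_bbox[3]) / 2.0
        xc, yc = image_centroid
        return math.hypot(xs - xc, ys - yc) <= tolerance_px

    # Example: accept a capture whose target section is roughly centered.
    print(is_acceptably_framed((1129, 928), (820, 400, 1500, 900)))  # True: deviation is ~280 px

Such a check could run after the layout recognition step described in insight 4, prompting the user to re-capture only when the deviation exceeds the tolerance.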
4.4 Qualitative User Study
A qualitative user satisfaction study was conducted to test the usability of GlassNage.
Participants. We recruited the same participants as in the previous Quantitative User Study.
Procedures. Each participant was provided with a Glass-like wearable device pre-installed with the GlassNage application, and was given a brief introduction followed by a set of instructions on how to use GlassNage. Each participant was then immediately asked to interact with the app and the digital signage. The experimenter intervened when specific questions were asked or when the instructions were misunderstood. After the exercise, the participants were asked questions on perceived workload from the NASA-TLX assessment [11].
In addition, the question “Was GlassNage hard to learn?” was always included in the questionnaire to gain insight into the learning curve of GlassNage. This was followed by several general questions about GlassNage, listed below:
1. Did GlassNage make the content browsing experience more enjoyable?
2. Did you feel that fetching content items through GlassNage is more effective than previously available methods?
3. Do you have any additional comments?
All the questions above were rated on a Likert scale (1: strongly agree – 5: strongly disagree).
Results. Table 2 shows the participants’ ratings of perceived workload (NASA-TLX) in the user study.
Table 2. The ratings (mean ± std) of the NASA-TLX questions
The results in Table 2 show that participants felt positive while using GlassNage. The average ratings for the “mental demand”, “effort”, and “frustration” subscales were the lowest, at 3.61 ± 1.52, 3.24 ± 1.42, and 3.64 ± 1.25, respectively. The participants also indicated that GlassNage was not difficult to learn (4.23 ± 1.52). Finally, the participants felt that, overall, GlassNage performed well (1.21 ± 1.51).
The responses to the general questions suggest that participants agreed that GlassNage made the content browsing experience more enjoyable (1.02 ± 0.24) and also felt that fetching content through GlassNage is more effective (1.43 ± 0.84) than previous methods of content fetching.
Many comments were given regarding technical issues in the application such as:
1. Implement finger-pointing gesture recognition to select content, rather than using the touch panel.
2. Implement a faster signage-section recognition framework.
3. Include a function to “push” information to public signage.
In addition to the feedback obtained from users, we observed that some participants initially had difficulties capturing a frame containing their desired section. This is consistent with the issue we raised in the Quantitative User Study subsection, and it motivates us to explore ways to mitigate this problem.