PsyGlass: Capitalizing on Google Glass for naturalistic data collection
As commercial technology moves further into wearable technologies, cognitive and psychological scientists can capitalize on these devices to facilitate naturalistic research designs while still maintaining strong experimental control. One such wearable technology is Google Glass (Google, Inc.: www.google.com/glass), which can present wearers with audio and visual stimuli while tracking a host of multimodal data. In this article, we introduce PsyGlass, a framework for incorporating Google Glass into experimental work that is freely available for download and community improvement over time (www.github.com/a-paxton/PsyGlass). As a proof of concept, we use this framework to investigate dual-task pressures on naturalistic interaction. The preliminary study demonstrates how designs from classic experimental psychology may be integrated in naturalistic interactive designs with emerging technologies. We close with a series of recommendations for using PsyGlass and a discussion of how wearable technology more broadly may contribute to new or adapted naturalistic research designs.
Keywords: Wearable computing · Interaction dynamics · Naturalistic methodology · Google Glass
Cognitive and social scientists often efficiently leverage commercial technologies to enhance behavioral measurements in experimental paradigms. For example, the ubiquity of the personal computer permits easy computer-mouse tracking, allowing researchers to investigate the continuous dynamics of cognition and decision-making over time by charting mouse-movement trajectories during computer-based experiments (e.g., Freeman & Ambady, 2010; Huette & McMurray, 2010; Spivey & Dale, 2006). As video game consoles opened their platforms to developers, researchers targeted the Nintendo Wii and Microsoft Kinect as opportunities for new behavioral tracking techniques. The Nintendo Wii became an extension of the mouse-tracking paradigm, allowing researchers to track free arm movements during choice selection (e.g., Dale, Roche, Snyder, & McCall, 2008; Duran, Dale, & McNamara, 2010), and the Microsoft Kinect provided highly affordable motion-tracking of overall body movements and specific effectors (e.g., Alexiadis et al., 2011; Clark et al., 2012; Oikonomidis, Kyriazis, & Argyros, 2011). Increasing computer availability and online presence have brought opportunities for worldwide data collection through services such as Amazon Mechanical Turk (e.g., Crump, McDonnell, & Gureckis, 2013; Paolacci, Chandler, & Ipeirotis, 2010). The recent explosion of open mobile application (“app”) development has provided researchers with the opportunity to integrate mobile phone technology into studies in and out of the lab (e.g., Gaggioli et al., 2013; Henze, Pielot, Poppinga, Schinke, & Boll, 2011; Miller, 2012; Raento, Oulasvirta, & Eagle, 2009). These are, naturally, just a handful of examples among many adaptations of technology for research purposes.
Through research-based apps, Google Glass can provide researchers with real-time control of even very subtle stimuli while unobtrusively tracking various behavioral measures. Glass can present wearers with visual stimuli on a small screen just over the right eye and with audio stimuli through a bone conduction transducer or proprietary earbuds. Wearers navigate Glass through voice command and with a small touchpad over the right temple. The device can capture high-resolution videos and photos, and researchers can track wearers’ head movements with on-board three-axis gyroscope and accelerometer sensors. Glass also includes on-board memory, wireless capabilities, and Google’s Android mobile operating system.1
Here, we first briefly review prior work that has used wearable technologies broadly and Glass specifically. We then introduce PsyGlass, our open-source platform for incorporating Glass into behavioral research that taps into some of these capabilities for naturalistic experimental work. As an example application for developing experimental paradigms with PsyGlass, we present a simple behavioral experiment that uses Glass both to present visual stimuli to participants and track participants’ movements during a naturalistic interaction task. We end with a list of recommendations for using PsyGlass, our goals for expanding its capabilities, and a brief discussion of how wearable technology can contribute to behavioral research.
Research opportunities for wearable technologies
Wearable technologies can give researchers the opportunity to track and quantify behavior in new ways. As technology has miniaturized while becoming more powerful, cognitive and social scientists have already begun looking for ways to incorporate it into research paradigms (e.g., Goodwin et al., 2008). Wearable technology is still a relatively underutilized methodology, but a growing number of researchers have adopted it in some behavioral and health-related domains. Although some of the capabilities provided by other wearable technologies may not be possible to implement with Glass, here we provide a brief history of wearable technology research, both to establish wearables’ existing foundation in research and to spark ideas for the kinds of questions to which Glass (and PsyGlass) could be applied.
Previous research with wearable technology
Interest in wearable technology in research-related settings has existed for quite some time. However, until recent advances in developer-friendly commercial technology such as Google Glass, many researchers have had to engineer their own wearable solutions. For instance, affective researchers have been engineering wearable solutions to track and classify affect for nearly two decades (e.g., Lee & Kwon, 2010; Picard & Healey, 1997). Since those early efforts, wearable technology has spread to other domains—most notably, to the health sciences (e.g., Moens et al., 2014; Moens, van Noorden, & Leman, 2010; for a review, see Pantelopoulos & Bourbakis, 2008).
One of the most prominent examples of wearable technologies in the behavioral sciences to date has been the sociometric badge, developed to provide a host of metrics on individual and group behaviors (e.g., Lepri et al., 2012; Olguín Olguín, Gloor, & Pentland, 2009; Olguín Olguín, Waber, et al., 2009; Pentland, 2010; Waber et al., 2011). The sociometric badge has been applied most heavily in analyses of workplace behavior and interactions (e.g., for describing a research network in Lepri et al., 2012; or in a hospital in Olguín Olguín, Gloor, & Pentland, 2009), exploring connections between workplace activities and social factors in largely observational-style studies. For more on sociometric badges and related work, see the review articles by Olguín Olguín, Waber, and colleagues (2009) and Waber and colleagues (2011).
Existing work utilizing Google Glass
Over the past year, there has been growing excitement about applying Glass in research, although the majority of published scientific work to date comprises commentaries. To the authors’ knowledge, Glass has been featured in only one published experimental study in the behavioral sciences (Ishimaru et al., 2014). However, interest in Glass has surged in other research areas, especially the health sciences.
The health sciences are arguably one of the areas most interested in Glass, particularly as assistive tools. Recent commentaries have touted possible uses for Glass in laboratories (Chai et al., 2014; Parviz, 2014) or as assistive devices (Hernandez & Picard, 2014). From surgical assistance (Armstrong, Rankin, Giovinco, Mills, & Matsuoka, 2014) to dietary tracking (Mauerhoefer, Kawelke, Poliakov, Olivier, & Foster, 2014) to perceptions of health-related Glass use (McNaney et al., 2014), many preliminary integrations of Glass into the medical and health sciences have capitalized solely on existing Glass capabilities without additional app development. Only a handful of researchers have developed specialized apps with a variety of health science applications, such as facilitating food shopping (Wall, Ray, Pathak, & Lin, 2014), augmenting conversation for individuals with visual impairment (Anam, Alam, & Yeasin, 2014a, 2014b), and assisting biomedical technicians (Feng et al., 2014).
Other research areas have also begun to incorporate Glass, albeit to a lesser extent than in the health sciences. To the authors’ knowledge, only Ishimaru and colleagues (2014) have incorporated Glass into cognitive science,2 investigating how blink patterns and head movements can be used to categorize wearers’ everyday activities. In the domain of human–computer interaction, He, Chaparro, and Haskins (2014) have developed a Glass app called “USee” that can be used to facilitate usability testing, providing separate components for participants, researchers, and other observers.
Despite this rising interest, the programming requirements for developing Glass apps could pose a significant barrier to entry for many cognitive and social scientists. Our goal is to lower this barrier by providing a framework for incorporating Glass that can be adjusted to individual research needs. By opening the application to community development, we hope to promote the important ethos of shared resources and to encourage others to grow the application with us.
PsyGlass: A framework for Glass in behavioral research
Google Glass provides behavioral, cognitive, and social scientists with many methodological and measurement possibilities as a research tool. Glass can simultaneously present stimuli and track various behavioral metrics, all while remaining relatively unobtrusive, cost-effective, and portable. However, developing research apps for Glass currently requires researchers to develop projects entirely on their own. We believe that a centralized resource with functioning example code and guidance through the development process could make Glass more accessible to a wider scientific audience.
To that end, we have created PsyGlass, an open-source framework for incorporating Google Glass into cognitive and social science research. All code for the PsyGlass framework is freely available through GitHub (GitHub, Inc.: www.github.com), allowing the research community to use, expand, and refine the project. The code is jointly hosted by all three coauthors and can be found in the PsyGlass repository on GitHub (http://github.com/a-paxton/PsyGlass).
PsyGlass experimenter console
The experimenter console manages the connection between the server and the Glass devices. The experimenter can use the console to open the initial server connection for each Glass. Once all Glass devices are connected, the experimenter can initiate the data collection session simultaneously across all devices to ensure time-locked data collection and stimulus presentation. While the Glass devices are connected to the server, the console provides the experimenter with updates about each server–Glass connection (e.g., latency). Once data collection is finished, the console allows the researcher to end the data collection session (again, simultaneously across all connected devices) and close the server connection for all connected Glass devices.
PsyGlass Glassware
The PsyGlass Glassware allows the experimenter to update the visual display on the basis of stimuli sent from the experimenter console while recording three-dimensional accelerometer data. Once the server connection has been opened from the console, the wearer (or the experimenter) can initiate the server-to-Glass connection with the Glassware. After the console opens the data collection session, the Glassware regularly polls the server (by default at 4 Hz, or every 250 ms) for visual display updates issued from the console. Time-stamped x, y, z accelerometer sensor data are logged to a local text file every 4 ms (250 Hz, by default) until the data collection session has been ended.
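The two default cadences described above can be sketched in simplified form. The actual Glassware is an Android app; the Python sketch below, with hypothetical names, simulates only the timing logic (4-Hz server polling for display updates, 250-Hz accelerometer logging) in 1-ms steps:

```python
# Minimal sketch of the PsyGlass Glassware timing loop (hypothetical names).
# Only the two default rates are modeled: poll the server for display updates
# at 4 Hz; log a time-stamped accelerometer sample at 250 Hz.

POLL_INTERVAL_MS = 250   # 4 Hz display-update polling
LOG_INTERVAL_MS = 4      # 250 Hz accelerometer logging

def tick_events(duration_ms):
    """Simulate one session and return (n_polls, n_logs) over duration_ms."""
    n_polls = n_logs = 0
    for t in range(duration_ms):           # one iteration per millisecond
        if t % POLL_INTERVAL_MS == 0:
            n_polls += 1                   # would fetch the current stimulus
        if t % LOG_INTERVAL_MS == 0:
            n_logs += 1                    # would append an x, y, z sample
    return n_polls, n_logs
```

For example, a 1-s session yields 4 polls and 250 logged samples, matching the default rates; lowering either rate (as suggested for battery life in the recommendations below) simply means larger interval constants.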
After data collection has finished, the experimenter can upload the accelerometer data stored locally on the device to the server. Collecting and storing the data on the Glass helps prevent overheating of the device and preserves battery life, but data could be streamed continuously to the server with some changes to the PsyGlass framework. Data are saved to the server as a tab-delimited text file. To save space on the device, the previous session’s data are deleted locally once a new data collection session is initiated. More information on the Glassware workflow is included in the Appendix.
Potential applications for PsyGlass
Although our initial interest in Glass grew from our studies of bodily synchrony during face-to-face dyadic interaction (Paxton & Dale, 2013), PsyGlass can be easily adapted to other settings. For example, researchers interested in humans’ exploration of their environment might track movement while providing visual cues on the display, whereas a study on language production might introduce distractors or incongruent lexical items on participants’ screens. In dyadic studies, researchers can use Glass to support naive confederate designs: A lexical cue or prearranged visual signal (e.g., color, shape) could instruct a participant to lie to their partner during a conversation or to act confused while completing a map task. These are, of course, only a few brief examples, but they highlight one of the most compelling features of PsyGlass: targeted control over a participant’s stimuli on the fly, even in highly naturalistic settings.
To demonstrate how PsyGlass can be used to facilitate behavioral research, we present data below from an experiment investigating how individuals compensate for distraction during conversation.3 This preliminary study demonstrates how Glass may open opportunities for new experimental designs with distinct theoretical implications. We believe that Glass presents a unique opportunity for interpersonal behavioral research, given its commercial availability,4 relative affordability, and array of sensing capabilities. The experimental design, data collection procedures, and data analysis provide a concrete example of how PsyGlass can be deployed to extend theory-rich questions into new domains.
Example PsyGlass application: Convergence during interaction
Interpersonal convergence or synchrony broadly describes how individuals become increasingly similar over time while they interact (e.g., Shockley, Richardson, & Dale, 2009). Previous research suggests that one benefit of convergence may be to help individuals overcome impoverished communication signals. For instance, individuals’ head movements synchronize more strongly during conversation with high ambient noise, as compared with conversation in an otherwise silent room (Boker, Rotondo, Xu, & King, 2002). These findings support the idea that interpersonal convergence may be vital to comprehension (e.g., Richardson & Dale, 2005; Shockley et al., 2009), perhaps by serving as a scaffold to support key aspects of the interaction in a synergistic account of interpersonal coordination (e.g., Dale, Fusaroli, Duran, & Richardson, 2014; Fusaroli et al., 2012; Riley, Richardson, Shockley, & Ramenzoni, 2011).
Building from Boker and colleagues’ (2002) findings in the auditory domain, in the present study we tested whether low-level visual distractors—analogues to auditory distractors—increase interpersonal movement synchrony during friendly conversations. We compared participants’ head movements during conversation (a) combined with a dual-task paradigm and (b) in the presence of “visual noise.” Using PsyGlass, we were able to present visual stimuli separately to each participant while surreptitiously collecting high-resolution head movement data. We anticipated that dyads would synchronize more during the “noise” condition (cf. the auditory noise in Boker et al., 2002). We chose the dual-task condition as a comparison condition that could decrease interpersonal synchrony, given a constellation of previous findings (e.g., regarding working memory and synchrony in Miles, Nind, & Macrae, 2009; and working memory and dual-task paradigms in Phillips, Tunstall, & Channon, 2007).
Participants
In return for course credit, 30 undergraduate students from the University of California, Merced, participated as 15 volunteer dyads, none of whom reported knowing one another. Each dyad was randomly assigned to either the noise (n = 7) or the dual-task (n = 8) condition. Due to connectivity issues, one dyad’s data (from the noise condition) were removed from the present analyses, since fewer than 3 min of usable movement data were recorded. (See the notes about connectivity issues in the General Discussion.)
Materials and procedure
After completing several questionnaires (not analyzed here), the participants were seated facing one another in two stationary chairs approximately 3 feet 2 in. apart in a semi-enclosed space within a private room. Both chairs were positioned in profile to a small table several yards away holding a 27-in. iMac (Apple, Inc.) computer, from which the experimenter would run the PsyGlass experimenter console during the experiment. Participants were then given 3 min to get acquainted without the experimenter present.
Once the experimenter returned, each participant was given a Google Glass with the PsyGlass Glassware and went through a brief setup process to become familiar with the device. The experimenter first described the Glass to the participants (i.e., explaining what the display and touchscreen were) and helped the participants properly fit the Glass to their faces. The experimenter then verbally guided participants through initializing the PsyGlass Glassware, providing the participants some experience with the device before beginning the experiment. The experimenter tested participants’ ability to see the full Glass display by ostensibly checking its connection, using the PsyGlass experimenter console to present participants with either a word (i.e., “Glass” or “test”) or a color (i.e., red, code #FF0000, or blue, code #0000FF) and asking them to report what change they saw on their screen.
Crucially, all dyads were then told that their Glass display would switch between blue and red during the experiment. To implement this, we created a version of the PsyGlass experimenter console that updated the screen color once per second (1 Hz), with a .9 probability of a blue screen and a .1 probability of a red screen.6 Dyads assigned to the dual-task condition were told to remember each time the screen turned red and that they would be asked to write down that number at the end of the conversation. This condition is akin to a dual-task oddball paradigm (Squires, Squires, & Hillyard, 1975). Dyads assigned to the noise condition were told that these switching colors were due to a bug in the programming and that they could ignore the changing screen during their conversation.
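The stimulus schedule just described (a 1-Hz color update with a .9 probability of blue and .1 probability of red) can be sketched as follows. The function name and structure are illustrative only, not the console's actual implementation:

```python
import random

BLUE, RED = "#0000FF", "#FF0000"

def color_schedule(duration_s, p_red=0.1, rng=None):
    """Draw one color per second (1 Hz update); red with probability p_red.

    Illustrative sketch of the experiment's stimulus schedule, not the
    actual PsyGlass console code.
    """
    rng = rng or random.Random()
    return [RED if rng.random() < p_red else BLUE for _ in range(duration_s)]
```

Under this schedule, an 8-min (480-s) conversation presents roughly 48 red screens on average, which is the count that dual-task dyads were asked to track.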
All dyads were then asked to hold an 8-min conversation with one another about popular media and entertainment (mean length = 8.12 min). After the remainder of the experiment,7 participants were thanked and debriefed.
Data analysis
Data were trimmed to exclude the calibration and instruction periods, retaining only the conversational data. The mean length of recorded movement data was 7.7 min (range = 4.17–8.86 min), largely due to connectivity errors in two of the included dyads. We converted the x, y, z accelerometer data for each participant into Euclidean distances to create a single metric of head movement over time, and then applied a second-order Butterworth filter to smooth the data. Cross-correlation coefficients (r) served as our metric of interpersonal synchrony, since they have been a fairly common metric for synchrony in previous research (e.g., Richardson, Dale, & Tomlinson, 2009). Cross-correlation provides a measure of the influence between individuals across windows of time: By correlating individuals’ time series at varying lags, we could measure the degree to which individuals were affecting one another more broadly. Following from previous research (Ramseyer & Tschacher, 2014), we calculated cross-correlation rs within a ±2,000-ms window.
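The movement-processing pipeline above can be sketched with NumPy. This is an illustrative reconstruction, not the authors' analysis code, and it omits the second-order Butterworth smoothing step (in practice applied with a signal-processing library such as SciPy, e.g., `scipy.signal.butter` with `filtfilt`):

```python
import numpy as np

def movement_magnitude(acc):
    """Collapse an (n, 3) array of x, y, z accelerometer samples into a
    single Euclidean-distance time series, as in the analyses above."""
    return np.linalg.norm(acc, axis=1)

def lagged_xcorr(a, b, max_lag):
    """Pearson r between two equal-length series at every integer lag in
    [-max_lag, max_lag]; returns a dict mapping lag -> r."""
    rs = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag < 0:
            x, y = a[:lag], b[-lag:]   # b leads a by |lag| samples
        elif lag > 0:
            x, y = a[lag:], b[:-lag]   # a leads b by lag samples
        else:
            x, y = a, b
        rs[lag] = float(np.corrcoef(x, y)[0, 1])
    return rs
```

At the default 250-Hz sampling rate, the ±2,000-ms cross-correlation window described above corresponds to `max_lag = 500` samples.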
The data were analyzed primarily using a linear mixed-effects model. The random-effects structure (using random slopes and intercepts) was kept as maximal as possible (Baayen, Davidson, & Bates, 2008; Barr, Levy, Scheepers, & Tily, 2013). Dyad membership was included as the sole random effect. The condition was dummy-coded prior to inclusion (0 = noise, 1 = dual-task). All other variables—including interaction terms—were centered and standardized (Baayen et al., 2008) prior to being entered into the model.
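The predictor coding described here can be sketched as follows, using hypothetical values; the model itself would be fit with a mixed-effects package (e.g., an lme4-style specification along the lines of `r ~ lag * condition + (1 + lag * condition | dyad)`, consistent with a maximal random-effects structure over dyad):

```python
import numpy as np

def standardize(x):
    """Center and scale a predictor to mean 0, SD 1 (cf. Baayen et al., 2008)."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

# Hypothetical predictor values for illustration only.
condition = np.array([0, 0, 0, 1, 1, 1])           # dummy-coded: 0 = noise, 1 = dual-task
lag_ms = np.array([-2000, 0, 2000, -2000, 0, 2000])

lag_z = standardize(lag_ms)                        # centered and standardized
interaction_z = standardize(condition * lag_z)     # interaction term, also standardized
```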
This model served two purposes: (a) to replicate previous findings of time-locked synchrony of head movements during conversation (Ramseyer & Tschacher, 2014) and (b) to explore whether low-level visual distractors would negatively impact that synchrony relative to increased working memory load. The model predicted r—our measure of interpersonal synchrony or convergence—with lag (±2,000 ms) and condition (dual-task = 1) as independent variables.
As anticipated, increases in lag significantly predicted decreases in r, providing evidence for in-phase interpersonal synchrony of head movements during conversation (β = –.50, p < .0001). The main effect of lag indicated that partners’ head movements were most strongly correlated at lag 0—that is, in moment-to-moment comparisons. The correlation decreased as the time series were compared at increasingly disparate points.
In the present study, we explored how interpersonal dynamics during naturalistic conversation are affected by environmental factors. Inspired by previous work in the auditory domain (Boker et al., 2002), we investigated how visual distractors and increased working memory load differentially affect interpersonal synchrony by using PsyGlass to quantify head movements. Although we replicated previous findings of head movement synchrony generally (Ramseyer & Tschacher, 2014), we found conflicting evidence for the impact of these conditions on synchrony.
Although the longer-range convergence was not significantly different between the two conditions, moment-to-moment (i.e., in-phase) synchrony was marginally higher in the dual-task condition, contrary to our expectations. These unexpected results could have several implications for this literature, to be disentangled with follow-up work. First, the results could suggest that—although higher working memory load may increase lag-0 synchrony—convergence unfolds similarly over a longer timescale, regardless of the nature of the external visual stimuli.
Second, these findings could suggest a reframing of the conditions in the present study as compared with those used by Boker and colleagues (2002). Rather than interpreting the auditory noise as a distractor, it might in fact have been more similar to the dual-task condition than the visual-noise condition: Both ambient noise and the dual task may be more task-relevant and less easily ignored during conversation than the irregular blue-to-red screen switches. Perhaps the key element is that distractors should in some way be unavoidable during interaction.
Using PsyGlass: Recommendations and limitations
Below we compile a number of recommendations and limitations to consider when using Google Glass with PsyGlass. These considerations address practical concerns about experimental design and data analysis with PsyGlass.
Troubleshooting modifications to PsyGlass can take time, especially for those new to Android and Glass development. Those new to Android coding should first familiarize themselves with the basic PsyGlass program and start with incremental changes to the code, building to larger extensions. Numerous developer resources for Android and Glass are available through third-party sources (e.g., programming forums, tutorial websites) and Google Developers (Google, Inc.: https://developers.google.com/).
In its current form, PsyGlass is very battery-intensive. Researchers may consider reducing the computational strain (e.g., by reducing the sampling rate) if using the application for extended periods of time, to preserve battery life. In our example experiment, PsyGlass actively ran for a maximum of 20 min per data collection session.8 By charging the Glass devices for up to 20 min between data collection sessions, we were able to run up to four back-to-back data collection sessions without battery problems. We imagine that this pattern could continue for longer but cannot say so from experience.
The on-board computer for Glass (which sits alongside the wearer’s right temple) can become quite warm to the touch after extended intensive use or charging. Although a very small number of participants commented on this warmth, no participants reported it as being uncomfortable, even when the Glass had been in use or charging for up to 3 h before their data collection session.
Because the Glass display does not have an opaque backing, nearby parties may be able to see some of the stimuli presented on the Glass display. Bright colors are the most easily noticeable, being recognizable from farther away than 45 feet.9 Although the presence of most text or shapes is perceivable from approximately 90 in., large text and shapes are somewhat identifiable as close as 21 in. and are distinctly readable by around 14 in. away. Small text, however, is unreadable even at 6 in. Researchers should take this into account and perform preliminary tests to ensure that it will not impinge on the experimental goals (e.g., during deception-based tasks). However, we have heard reports of others attaching lightweight backings to the Glass, which may serve as a solution in these cases.
Although Google Glass is designed to be worn over regular glasses, it can be somewhat difficult for some wearers to comfortably wear Glass while being able to easily see the entire screen. In some cases—like our color-based example study—being able to see most of the screen clearly should suffice. However, this may be an issue for experimental designs relying on text-based prompts or stimuli. Researchers may consider altering their experimental design or restricting participant eligibility in such cases.
Many participants will likely have had little to no prior experience using Google Glass. Anecdotally, many of our participants commented on how “exciting” or “weird” Glass was. We recommend that researchers at least briefly introduce Glass to participants before beginning the experiment. An introduction to Glass minimizes participants’ awkwardness with the device and reduces the chance that participants will interfere with key Glass capabilities during the experiment (e.g., by brushing the touchpad). Researchers may use our protocol—reported in the Materials and Procedure section—as a guide.
The framework is currently designed to protect the integrity of data transfer between the server and connected Glass devices. Therefore, problems with wireless Internet connections can cause PsyGlass to terminate the data collection session or disconnect the Glass from the server entirely. All data collected prior to termination are still saved locally on the device. By prioritizing connectivity, PsyGlass is able to ensure that all commands are executed as intended, but this may be an issue for individuals who have unreliable or restricted wireless networks. This behavior can currently be changed by reprogramming PsyGlass, and we hope to release an alternate version that is more forgiving in this area.
Researchers may consider applying down-sampling procedures, band-pass filters, or moving averages for their data analysis, depending on project needs and the standard practices of relevant target research area(s). The high-resolution movement data provide high statistical power for time series analyses, but this power may not always be needed. An example of data manipulation and filtering has been provided above in the Analyses section.
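Two of the preprocessing options suggested above can be sketched in NumPy; these are generic, illustrative implementations rather than PsyGlass-specific utilities:

```python
import numpy as np

def moving_average(x, k):
    """Smooth a 1-D series with a k-sample moving average
    (returns len(x) - k + 1 values)."""
    return np.convolve(x, np.ones(k) / k, mode="valid")

def downsample(x, factor):
    """Reduce a series by keeping every `factor`-th sample;
    e.g., factor=25 reduces the default 250 Hz stream to 10 Hz."""
    return np.asarray(x)[::factor]
```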
Wearable technology can provide researchers with opportunities to explore naturalistic behavior dynamics both in and out of the lab. PsyGlass capitalizes on Google Glass to give researchers a stimulus presentation and data collection tool in an easily customizable, open-source framework. We have provided an example application of PsyGlass to dyadic interaction research, but the paradigm is open to single- and multiparticipant studies. We welcome other researchers to join us in using or expanding PsyGlass on GitHub (www.github.com/a-paxton/PsyGlass). The openness of the Google Glass developer community stands as a resource for researchers interested in tapping into other dimensions of the Google Glass, from audio stimulus presentation to eye-camera recording.
Update regarding purchasing Google Glass
Google has recently shifted the Glass program to focus more on developers and enterprise needs through its “Glass at Work” program (https://developers.google.com/glass/distribute/glass-at-work). At the time of writing, those interested in purchasing Glass for research or educational needs may contact the Glass at Work program at email@example.com. Any changes or additional relevant information will be included on the readme file at the PsyGlass repository (http://github.com/a-paxton/PsyGlass).
Future directions for PsyGlass and wearable technology
Wearable solutions like PsyGlass and other tools (e.g., Olguín Olguín et al., 2009a) are helping researchers increase external validity and target the real-world behaviors that they are interested in exploring. Especially for complex behaviors like interaction, researchers must balance experimental controls with experimental designs targeting naturalistic behaviors. By providing wireless, portable, minimalistic behavior tracking, wearable technology can unobtrusively quantify behavioral metrics and give moment-to-moment control over stimulus presentation. These represent an addition to our tools for creating naturalistic, externally valid experiments that tap into the real-world behaviors we seek to capture. With PsyGlass, we hope to lower the barriers to entry for other researchers who are interested in capitalizing on these new opportunities.
In that vein, we intend to continue to expand PsyGlass as a methodological tool that can contribute to theoretical inquiry. Our basic goals include tapping additional Glass capabilities for data collection (e.g., gyroscope, eye-camera capture) and stimulus presentation (e.g., audio) to give researchers more experimental design and multimodal options. We have already created optional modules to implement lexical decision tasks on PsyGlass, available on GitHub. We hope to provide a suite of collection and presentation options that others can use to cobble together versions of PsyGlass that fit their needs. Our first goal for major expansion is to create a way for partners’ Glass devices to be interactively updated by one another—for instance, by having the amplitude of movement of one Glass (measured by the accelerometer) update the visual stimuli of a second, connected Glass. In doing so, PsyGlass can subtly prompt interaction dynamics and alter interpersonal behaviors on the basis of prespecified events. By putting the code onto an open community for programmers, we hope to encourage others to join us in our expansion and refinement of the PsyGlass tool.
This information is current as of December 2014 and describes the Glass Explorer model (version 2). Detailed specifications are freely available through Google Developer’s Glass resources (http://developers.google.com/glass).
Some of the cited works from the health sciences have had behavioral components, but such works are primarily focused on health and/or medical applications.
These data are part of a larger ongoing research project investigating how interaction is affected by various contextual pressures.
The protocol for purchasing Google Glass has changed. Further information is provided in the General Discussion.
For a quick demonstration, see https://developers.google.com/glass/develop/gdk/quick-start.
If the updated color was the same as the current color, the screen did not appear to change or flicker.
The session included subsequent conditions that are not analyzed here, as they are beyond the scope of the current demonstration.
Due to additional conditions outside of the scope of the present article.
Measured from the nose of wearer to the nose of viewer in a well-lit room, with the viewer having normal, uncorrected vision.
We thank UC Merced undergraduate research assistants Keith Willson, Krina Patel, and Kyle Carey for their assistance in data collection for the example PsyGlass application.
- Alexiadis, D. S., Kelly, P., Daras, P., O’Connor, N. E., Boubekeur, T., & Ben Moussa, M. (2011). Evaluating a dancer’s performance using Kinect-based skeleton tracking. In Proceedings of the 19th ACM International Conference on Multimedia (pp. 659–662). New York, NY: ACM Press.
- Anam, A. I., Alam, S., & Yeasin, M. (2014a). Expression: A dyadic conversation aid using Google Glass for people with visual impairments. In Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct (pp. 211–214). New York, NY: ACM Press.
- Anam, A. I., Alam, S., & Yeasin, M. (2014b). Expression: A Google Glass based assistive solution for social signal processing. In Proceedings of the 16th International ACM SIGACCESS Conference on Computers & Accessibility (pp. 295–296). New York, NY: ACM Press. doi:10.1145/2661334.2661348
- Dale, R., Fusaroli, R., Duran, N. D., & Richardson, D. C. (2014). The self-organization of human interaction. In B. H. Ross (Ed.), The psychology of learning and motivation (Vol. 59, pp. 43–95). San Diego, CA: Elsevier Academic Press.
- Hernandez, J., & Picard, R. W. (2014). SenseGlass: Using Google Glass to sense daily emotions. In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology (pp. 77–78). New York, NY: ACM Press.
- Ishimaru, S., Kunze, K., Kise, K., Weppner, J., Dengel, A., Lukowicz, P., & Bulling, A. (2014). In the blink of an eye: Combining head motion and eye blink frequency for activity recognition with Google Glass. In Proceedings of the 5th Augmented Human International Conference (pp. 1–4). New York, NY: ACM Press.
- Lee, H., & Kwon, J. (2010). Combining context-awareness with wearable computing for emotion-based contents service. International Journal of Advanced Science and Technology, 22, 13–24.
- Lepri, B., Staiano, J., Rigato, G., Kalimeri, K., Finnerty, A., Pianesi, F., … & Pentland, A. (2012). The sociometric badges corpus: A multilevel behavioral dataset for social behavior in complex organizations. In Proceedings of the 2012 International Conference on Privacy, Security, Risk and Trust (PASSAT) and the 2012 International Conference on Social Computing (SocialCom) (pp. 623–628). Piscataway, NJ: IEEE Press.
- Mauerhoefer, L., Kawelke, P., Poliakov, I., Olivier, P., & Foster, E. (2014). An exploration of the feasibility of using Google Glass for dietary assessment (No. CS-TR-1419) (pp. 1–10). Newcastle upon Tyne, UK: Newcastle University.
- McNaney, R., Vines, J., Roggen, D., Balaam, M., Zhang, P., Poliakov, I., & Olivier, P. (2014). Exploring the acceptability of Google Glass as an everyday assistive device for people with Parkinson’s. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 2551–2554). New York, NY: ACM Press.
- Moens, B., Muller, C., van Noorden, L., Franěk, M., Celie, B., Boone, J., & Leman, M. (2014). Encouraging spontaneous synchronisation with D-Jogger, an adaptive music player that aligns movement and music. PLoS ONE, 9, e114234. doi:10.1371/journal.pone.0114234
- Moens, B., van Noorden, L., & Leman, M. (2010). D-jogger: Syncing music with walking. In Proceedings of the 7th Sound and Music Computing Conference (pp. 451–456). New York, NY: ACM Press.
- Oikonomidis, I., Kyriazis, N., & Argyros, A. A. (2011). Efficient model-based 3D tracking of hand articulations using Kinect. In J. Hoey, S. McKenna, & E. Trucco (Eds.), Proceedings of the British machine vision conference (pp. 101.1–101.11). Durham, UK: BMVA Press.
- Olguín Olguín, D., Gloor, P. A., & Pentland, A. (2009a). Capturing individual and group behavior with wearable sensors. In T. Choundhury, A. Kapoor, & H. Kautz (Eds.), Papers from the AAAI spring symposium on human behavior modeling (pp. 68–74). Menlo Park, CA: AAAI Press.
- Pantelopoulos, A., & Bourbakis, N. (2008). A survey on wearable biosensor systems for health monitoring. In Proceedings of the 30th annual international engineering in medicine and biology society conference (pp. 4887–4890). Piscataway, NJ: IEEE Press.
- Paolacci, G., Chandler, J., & Ipeirotis, P. G. (2010). Running experiments on Amazon Mechanical Turk. Judgment and Decision Making, 5, 411–419.
- Pentland, A. S. (2010). Honest signals. Cambridge, MA: MIT Press.
- Starner, T., Mann, S., Rhodes, B., Levine, J., Healey, J., Kirsch, D., & Pentland, A. (1997). Augmented reality through wearable computing. Presence: Teleoperators and Virtual Environments, 6, 386–398.
- Waber, B. N., Aral, S., Olguín Olguín, D., Wu, L., Brynjolfsson, E., & Pentland, A. (2011). Sociometric badges: A new tool for IS research. Social Science Research Network, 1789103.