Cognitive and social scientists have often been quick to leverage commercial technologies to enhance behavioral measurement in experimental paradigms. For example, the ubiquity of the personal computer permits easy computer-mouse tracking, allowing researchers to investigate the continuous dynamics of cognition and decision-making over time by charting mouse-movement trajectories during computer-based experiments (e.g., Freeman & Ambady, 2010; Huette & McMurray, 2010; Spivey & Dale, 2006). As video game consoles opened their platforms to developers, researchers targeted the Nintendo Wii and Microsoft Kinect as opportunities for new behavioral tracking techniques. The Nintendo Wii became an extension of the mouse-tracking paradigm, allowing researchers to track free arm movements during choice selection (e.g., Dale, Roche, Snyder, & McCall, 2008; Duran, Dale, & McNamara, 2010), and the Microsoft Kinect provided highly affordable motion tracking of overall body movements and specific effectors (e.g., Alexiadis et al., 2011; Clark et al., 2012; Oikonomidis, Kyriazis, & Argyros, 2011). Increasing computer availability and online presence have brought opportunities for worldwide data collection through services such as Amazon Mechanical Turk (e.g., Crump, McDonnell, & Gureckis, 2013; Paolacci, Chandler, & Ipeirotis, 2010). The recent explosion of open mobile application (“app”) development has provided researchers with the opportunity to integrate mobile phone technology into studies in and out of the lab (e.g., Gaggioli et al., 2013; Henze, Pielot, Poppinga, Schinke, & Boll, 2011; Miller, 2012; Raento, Oulasvirta, & Eagle, 2009). These are, naturally, just a handful of examples among many adaptations of technology for research purposes.

Over the past decade, a new breed of technology has emerged and is poised to generate new experimental and methodological explorations. Numerous segments of the technology industry have moved into wearable technologies as a new avenue for products and services. From smart watches to fitness trackers, these devices offer a range of functions for a variety of applications and intended audiences, many of which can be integrated into behavioral research (e.g., Goodwin, Velicer, & Intille, 2008; Klonoff, 2014; Picard & Healey, 1997; Starner et al., 1997). One well-known wearable technology is Google Glass (Google, Inc.), a multipurpose device worn on the face like glasses (see Fig. 1). Its range of functionalities and its openness to developers make it a potentially powerful tool for cognitive and social science research, both in and out of the lab.

Fig. 1 Photo of Google Glass (Google, Inc.: www.google.com/glass)

Through research-based apps, Google Glass can provide researchers with real-time control of even very subtle stimuli while unobtrusively tracking various behavioral measures. Glass can present wearers with visual stimuli on a small screen just over the right eye and with audio stimuli through a bone conduction transducer or proprietary earbuds. Wearers navigate Glass through voice command and with a small touchpad over the right temple. The device can capture high-resolution videos and photos, and researchers can track wearers’ head movements with on-board three-axis gyroscope and accelerometer sensors. Glass also includes on-board memory, wireless capabilities, and Google’s Android mobile operating system.

Here, we first briefly review prior work that has used wearable technologies broadly and Glass specifically. We then introduce PsyGlass, our open-source platform for incorporating Glass into behavioral research, which taps into some of these capabilities for naturalistic experimental work. As an example application for developing experimental paradigms with PsyGlass, we present a simple behavioral experiment that uses Glass both to present visual stimuli to participants and to track participants’ movements during a naturalistic interaction task. We end with a list of recommendations for using PsyGlass, our goals for expanding its capabilities, and a brief discussion of how wearable technology can contribute to behavioral research.

Research opportunities for wearable technologies

Wearable technologies can give researchers the opportunity to track and quantify behavior in new ways. As technology has miniaturized while becoming more powerful, cognitive and social scientists have already begun looking for ways to incorporate it into research paradigms (e.g., Goodwin et al., 2008). Wearable technology is still a relatively underutilized methodology, but a growing number of researchers have adopted it in some behavioral and health-related domains. Although some of the capabilities provided by other wearable technologies may not be possible to implement with Glass, we here provide a brief history of wearable technology research, to establish wearables’ existing foundation in research and to spark ideas for the kinds of questions to which Glass (and PsyGlass) could be applied.

Previous research with wearable technology

Interest in wearable technology in research-related settings has existed for quite some time. Until recent advances in developer-friendly commercial technology such as Google Glass, however, many researchers had to engineer their own wearable solutions. For instance, affective researchers have been engineering wearable solutions to track and classify affect for nearly two decades (e.g., Lee & Kwon, 2010; Picard & Healey, 1997). Since those early efforts, wearable technology has spread to other domains—most notably, to the health sciences (e.g., Moens et al., 2014; Moens, van Noorden, & Leman, 2010; for a review, see Pantelopoulos & Bourbakis, 2008).

One of the most prominent examples of wearable technologies in the behavioral sciences to date has been the sociometric badge, developed to provide a host of metrics on individual and group behaviors (e.g., Lepri et al., 2012; Olguín Olguín, Gloor, & Pentland, 2009; Olguín Olguín, Waber, et al., 2009; Pentland, 2010; Waber et al., 2011). The sociometric badge has been applied most heavily in analyses of workplace behavior and interactions (e.g., for describing a research network in Lepri et al., 2012; or in a hospital in Olguín Olguín, Gloor, & Pentland, 2009), exploring connections between workplace activities and social factors in largely observational-style studies. For more on sociometric badges and related work, see the review articles by Olguín Olguín, Waber, and colleagues (2009) and Waber and colleagues (2011).

Existing work utilizing Google Glass

Over the past year, there has been growing excitement about applying Glass in research, although the majority of published scientific work to date comprises commentaries. To the authors’ knowledge, Glass has been featured in only one published experimental study in the behavioral sciences (Ishimaru et al., 2014). However, interest in Glass has surged in other research areas, especially the health sciences.

The health sciences are arguably among the areas most interested in Glass, particularly as an assistive tool. Recent commentaries have touted possible uses for Glass in laboratories (Chai et al., 2014; Parviz, 2014) or as assistive devices (Hernandez & Picard, 2014). From surgical assistance (Armstrong, Rankin, Giovinco, Mills, & Matsuoka, 2014) to dietary tracking (Mauerhoefer, Kawelke, Poliakov, Olivier, & Foster, 2014) to perceptions of health-related Glass use (McNaney et al., 2014), many preliminary integrations of Glass into the medical and health sciences have capitalized solely on existing Glass capabilities without additional app development. Only a handful of researchers have developed specialized apps with a variety of health science applications, such as facilitating food shopping (Wall, Ray, Pathak, & Lin, 2014), augmenting conversation for individuals with visual impairment (Anam, Alam, & Yeasin, 2014a, 2014b), and assisting biomedical technicians (Feng et al., 2014).

Other research areas have also begun to incorporate Glass, albeit to a lesser extent than in the health sciences. To the authors’ knowledge, only Ishimaru and colleagues (2014) have incorporated Glass into cognitive science, investigating how blink patterns and head movements can be used to categorize wearers’ everyday activities. In the domain of human–computer interaction, He, Chaparro, and Haskins (2014) have developed a Glass app called “USee” that can be used to facilitate usability testing, providing separate components for participants, researchers, and other observers.

Despite this rising interest, the programming requirements for developing Glass apps could pose a significant barrier to entry for many cognitive and social scientists. Our goal is to lower this barrier by providing a framework for incorporating Glass that can be adjusted to individual research needs. By opening the application to community development, we hope to promote the important ethos of shared resources and to encourage others to grow the application with us.

PsyGlass: A framework for Glass in behavioral research

Google Glass provides behavioral, cognitive, and social scientists with many methodological and measurement possibilities as a research tool. Glass can simultaneously present stimuli and track various behavioral metrics, all while remaining relatively unobtrusive, cost-effective, and portable. However, developing research apps for Glass currently requires researchers to develop projects entirely on their own. We believe that a centralized resource with functioning example code and guidance through the development process could make Glass more accessible to a wider scientific audience.

To that end, we have created PsyGlass, an open-source framework for incorporating Google Glass into cognitive and social science research. All code for the PsyGlass framework is freely available through GitHub (GitHub, Inc.: www.github.com), allowing the research community to use, expand, and refine the project. The code is jointly hosted by all three coauthors and can be found in the PsyGlass repository on GitHub (http://github.com/a-paxton/PsyGlass).

PsyGlass facilitates data collection and moment-to-moment experimenter control over stimuli on connected Glass devices. Currently, PsyGlass supports single-participant or dyadic research, although it can be adapted to include additional participants. The framework (see Fig. 2) includes a Web-based experimenter console and specially designed Glassware (i.e., a Glass app) built using Android Studio (Google, Inc.; http://developer.android.com/sdk/). PsyGlass currently presents only visual stimuli and collects only accelerometer data, although we are working to expand data collection and stimulus presentation to other modalities as well (see the Future Directions section).

Fig. 2 PsyGlass framework flow and the programming and/or markup languages of each component (listed in parentheses). In the experimenter console’s current form, the researcher can use it to update visual displays on one or more connected Glass devices while collecting accelerometer data from each

PsyGlass experimenter console

The experimenter console is a streamlined Web interface that allows the experimenter to manipulate connected Glass visual displays (see Fig. 3). The console provides separate controls for up to two Glass devices, allowing the experimenter to update text and the background color displayed to each. With relatively basic JavaScript capabilities, experimenters may modify the console as desired to provide more automated solutions for one or more connected devices (e.g., presenting colors or words from a list at random).
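For instance, drawing each display update at random from a predefined list involves only a few lines of logic. The snippet below sketches that logic in Python purely for illustration; the actual console is written in JavaScript, and the stimulus list and print-based “display” here are hypothetical stand-ins.

```python
import random

STIMULI = ["dog", "cat", "#FF0000", "#0000FF"]  # any mix of words and hex color codes

def next_stimulus():
    """Draw the next display update at random from the predefined list."""
    return random.choice(STIMULI)

# In the real console, the chosen value would be sent to a connected Glass
# device; here we simply print it.
print(next_stimulus())
```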

Fig. 3 PsyGlass experimenter console. From here, the experimenter can manage the connection between the connected Google Glass device(s) and the server, initiate data collection sessions, and update the Glass screen(s) with text and/or color

The console also manages the connection between the server and the Glass devices. The experimenter can use the console to open the initial server connection for each Glass. Once all Glass devices are connected, the experimenter can initiate the data collection session simultaneously across all devices to ensure time-locked data collection and stimulus presentation. The console provides the experimenter with updates about each server–Glass connection (e.g., latency) while the Glass devices are connected to the server. Once data collection is finished, the console allows the researcher to end the data collection session (again, simultaneously across all connected devices) and close the server connection for each Glass device.

PsyGlass Glassware

The PsyGlass Glassware allows the experimenter to update the visual display on the basis of stimuli sent from the experimenter console while recording three-dimensional accelerometer data. Once the server connection has been opened from the console, the wearer (or the experimenter) can initiate the server-to-Glass connection with the Glassware. After the console opens the data collection session, the Glassware regularly polls the server (by default at 4 Hz, or every 250 ms) for visual display updates issued from the console. Time-stamped x, y, z accelerometer sensor data are logged to a local text file every 4 ms (250 Hz, by default) until the data collection session has been ended.

After data collection has finished, the experimenter can upload the accelerometer data stored locally on the device to the server. Collecting and storing the data on the Glass helps prevent overheating of the device and preserves battery life, but data could be streamed continuously to the server with some changes to the PsyGlass framework. Data are saved to the server as a tab-delimited text file. To save space on the device, the previous session’s data are deleted locally once a new data collection session is initiated. More information on the Glassware workflow is included in the Appendix.
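Once uploaded, the log can be inspected with standard tools. The sketch below, in Python, assumes a header-less tab-delimited file containing a timestamp followed by the x, y, and z accelerometer readings; the file name and column order are assumptions that should be checked against the output of the PsyGlass version in use.

```python
import pandas as pd

# Assumed layout: tab-delimited rows of [timestamp, x, y, z] with no header row;
# verify the column order against the file your PsyGlass version produces.
cols = ["timestamp_ms", "accel_x", "accel_y", "accel_z"]
accel = pd.read_csv("glass_session_data.txt", sep="\t", names=cols)

# Sanity-check the effective sampling interval (nominally 4 ms, i.e., 250 Hz).
print(accel["timestamp_ms"].diff().describe())
```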

Potential applications for PsyGlass

Although our initial interest in Glass grew from our studies of bodily synchrony during face-to-face dyadic interaction (Paxton & Dale, 2013), PsyGlass can be easily adapted for other settings. For example, researchers interested in humans’ exploration of their environment might track movement while providing visual cues on the display, whereas a study on language production might introduce distractors or incongruent lexical items on participants’ screens. In dyadic studies, researchers can use Glass to support naive confederate designs: A lexical cue or prearranged visual signal (e.g., color, shape) could instruct a participant to lie to their partner during a conversation or to act confused while completing a map task. These are, of course, only a few brief examples, but they highlight one of the most compelling features of PsyGlass: targeted control over a participant’s stimuli on the fly, even in highly naturalistic settings.

To demonstrate how PsyGlass can be used to facilitate behavioral research, we present data below from an experiment investigating how individuals compensate for distraction during conversation. This preliminary study demonstrates how Glass may open opportunities for new experimental designs with distinct theoretical implications. We believe that Glass presents a unique opportunity for interpersonal behavioral research, given its commercial availability, relative affordability, and array of sensing capabilities. The experimental design, data collection procedures, and data analysis provide a concrete example of how PsyGlass can be deployed to extend theory-rich questions into new domains.

Example PsyGlass application: Convergence during interaction

Interpersonal convergence or synchrony broadly describes how individuals become increasingly similar over time while they interact (e.g., Shockley, Richardson, & Dale, 2009). Previous research suggests that one benefit of convergence may be to help individuals overcome impoverished communication signals. For instance, individuals’ head movements synchronize more strongly during conversation with high ambient noise, as compared with conversation in an otherwise silent room (Boker, Rotondo, Xu, & King, 2002). These findings support the idea that interpersonal convergence may be vital to comprehension (e.g., Richardson & Dale, 2005; Shockley et al., 2009), perhaps by serving as a scaffold to support key aspects of the interaction in a synergistic account of interpersonal coordination (e.g., Dale, Fusaroli, Duran, & Richardson, 2014; Fusaroli et al., 2012; Riley, Richardson, Shockley, & Ramenzoni, 2011).

Building from Boker and colleagues’ (2002) findings in the auditory domain, in the present study we tested whether low-level visual distractors—analogues to auditory distractors—increase interpersonal movement synchrony during friendly conversations. We compared participants’ head movements during conversation (a) combined with a dual-task paradigm and (b) in the presence of “visual noise.” Using PsyGlass, we were able to present visual stimuli separately to each participant while surreptitiously collecting high-resolution head movement data. We anticipated that dyads would synchronize more during the “noise” condition (cf. the auditory noise in Boker et al., 2002). We chose the dual-task condition as a comparison condition that could decrease interpersonal synchrony, given a constellation of previous findings (e.g., regarding working memory and synchrony in Miles, Nind, & Macrae, 2009; and working memory and dual-task paradigms in Phillips, Tunstall, & Channon, 2007).

Method

Setting up PsyGlass

Once our experiment was designed, we took a series of steps to set up the technical foundation for PsyGlass. Because this was a dyadic interaction study, we prepared two Glass devices, one for each participant. First, the native Java code for PsyGlass must be compiled onto the Glass devices. The Java code distributed on GitHub (linked above) can be compiled in the Glass software development kit environment (called the “GDK”); Google’s documentation for this process is quite thorough. Second, to accompany PsyGlass on the Glass devices, we developed JavaScript code that controls the PsyGlass experimenter console. This JavaScript code (also included on GitHub) controls the nature and timing of the stimuli (described below). Third, we installed the PHP code on a server; this code coordinates data collection through the experimenter’s browser and receives the data uploaded from the Glass devices. Importantly, this setup requires that the experimenter’s computer and the two Glass devices be connected to the Internet during the entire experiment.

Participants

In return for course credit, 30 undergraduate students from the University of California, Merced, participated as 15 volunteer dyads, none of whom reported knowing one another. Each dyad was randomly assigned to either the noise (n = 7) or the dual-task (n = 8) condition. Due to connectivity issues, one dyad’s data (from the noise condition) were removed from the present analyses, since fewer than 3 min of usable movement data were recorded. (See the notes about connectivity issues in the General Discussion.)

Materials and procedure

After completing several questionnaires (not analyzed here), the participants were seated facing one another in two stationary chairs approximately 3 feet 2 in. apart in a semi-enclosed space within a private room. Both chairs were positioned in profile to a small table several yards away that held a 27-in. iMac (Apple, Inc.) computer, from which the experimenter would run the PsyGlass experimenter console during the experiment. Participants were then given 3 min to get acquainted without the experimenter present.

Once the experimenter returned, each participant was given a Google Glass with the PsyGlass Glassware and went through a brief setup process to become familiar with the device. The experimenter first described the Glass to the participants (i.e., explaining what the display and touchpad were) and helped the participants properly fit the Glass to their faces. The experimenter then verbally guided participants through initializing the PsyGlass Glassware, providing the participants some experience with the device before beginning the experiment. The experimenter tested participants’ ability to fully see the Glass display by ostensibly checking its connection, using the PsyGlass experimenter console to present participants with either a single word (i.e., “Glass” or “test”) or a color (i.e., red, code #FF0000, or blue, code #0000FF) and asking them to report what change they saw on their screen.

Crucially, all dyads were then told that their Glass display would switch between blue and red during the experiment. To implement this, we created a version of the PsyGlass experimenter console that updated the screen color once per second (1 Hz), with a .9 probability of a blue screen and a .1 probability of a red screen. Dyads assigned to the dual-task condition were told to count how many times the screen turned red and that they would be asked to write down that number at the end of the conversation. This condition is akin to a dual-task oddball paradigm (Squires, Squires, & Hillyard, 1975). Dyads assigned to the noise condition were told that these switching colors were due to a bug in the programming and that they could ignore the changing screen during their conversation.
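For reference, the logic of this schedule is sketched below in Python; the real schedule runs in the JavaScript experimenter console, so the function names and the print-based stand-in for the display update are illustrative assumptions only.

```python
import random
import time

def pick_color():
    """Return blue with probability .9 and red with probability .1."""
    return "#0000FF" if random.random() < 0.9 else "#FF0000"

def run_schedule(update_display, duration_s=480):
    """Push a color to the (hypothetical) display once per second.

    `update_display` stands in for whatever call updates a connected Glass
    device; in PsyGlass itself this happens from the experimenter console.
    """
    for _ in range(duration_s):
        update_display(pick_color())
        time.sleep(1.0)  # 1-Hz update rate, as in the example experiment

# Example usage: print the colors instead of sending them to a device.
run_schedule(print, duration_s=5)
```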

All dyads were then asked to hold an 8-min conversation with one another about popular media and entertainment (mean length = 8.12 min). After the remainder of the experiment, participants were thanked and debriefed.

Analyses

Data were trimmed to exclude the calibration and instruction periods, retaining only the conversational data. The mean length of recorded movement data was 7.7 min (range = 4.17–8.86 min), with the shorter recordings largely due to connectivity errors in two of the included dyads. We converted the x, y, z accelerometer data for each participant into Euclidean distances to create a single metric of head movement over time, and then applied a second-order Butterworth filter to smooth the data. Cross-correlation coefficients (r) served as our metric of interpersonal synchrony, since they have been used fairly commonly to quantify synchrony in previous research (e.g., Richardson, Dale, & Tomlinson, 2009). Cross-correlation provides a measure of the influence between individuals across windows of time: By correlating individuals’ time series at varying lags, we could measure the degree to which individuals were affecting one another more broadly. Following previous research (Ramseyer & Tschacher, 2014), we calculated cross-correlation rs within a ±2,000-ms window.
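The following Python sketch illustrates this pipeline on simulated data: the x, y, z samples are collapsed into a movement magnitude, smoothed with a second-order Butterworth filter, and cross-correlated at every lag within ±2,000 ms. The low-pass cutoff, the per-sample lag step, and the simulated input are illustrative assumptions rather than the exact settings of the original analysis.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess(xyz, fs=250, cutoff_hz=10):
    """Collapse x, y, z accelerometer samples into a smoothed movement series.

    `cutoff_hz` is an assumed low-pass cutoff; the original filter settings
    are not specified here.
    """
    magnitude = np.linalg.norm(xyz, axis=1)       # Euclidean norm per sample
    b, a = butter(2, cutoff_hz / (fs / 2))        # second-order Butterworth, low-pass
    return filtfilt(b, a, magnitude)

def lagged_crosscorrelations(a, b, fs=250, max_lag_ms=2000):
    """Pearson r between two movement series at every lag within +/-2,000 ms."""
    max_lag = int(max_lag_ms / 1000 * fs)
    lags = range(-max_lag, max_lag + 1)
    rs = []
    for lag in lags:
        if lag < 0:
            r = np.corrcoef(a[:lag], b[-lag:])[0, 1]
        elif lag > 0:
            r = np.corrcoef(a[lag:], b[:-lag])[0, 1]
        else:
            r = np.corrcoef(a, b)[0, 1]
        rs.append(r)
    return np.array(lags), np.array(rs)

# Example with simulated data for two interacting partners (one minute at 250 Hz).
rng = np.random.default_rng(0)
xyz_a, xyz_b = rng.normal(size=(2, 250 * 60, 3))
lags, rs = lagged_crosscorrelations(preprocess(xyz_a), preprocess(xyz_b))
```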

Results

The data were analyzed primarily using a linear mixed-effects model. The random-effects structure (using random slopes and intercepts) was kept as maximal as possible (Baayen, Davidson, & Bates, 2008; Barr, Levy, Scheepers, & Tily, 2013), with dyad membership as the sole random-effects grouping factor. Condition was dummy-coded prior to inclusion (0 = noise, 1 = dual-task). All other variables—including interaction terms—were centered and standardized (Baayen et al., 2008) prior to being entered into the model.

This model served two purposes: (a) to replicate previous findings of time-locked synchrony of head movements during conversation (Ramseyer & Tschacher, 2014) and (b) to explore whether low-level visual distractors would negatively impact that synchrony relative to increased working memory load. The model predicted r—our measure of interpersonal synchrony or convergence—with lag (±2,000 ms) and condition (dual-task = 1) as independent variables.
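A model of this general form can be fit with most mixed-effects packages. The sketch below uses Python's statsmodels on synthetic data as one possibility: it assumes a long-format table with one row per dyad and lag and treats lag as the signed, standardized lag; the data layout, variable names, and software choice are assumptions, not a description of the original analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic long-format data standing in for the real output:
# one row per dyad and lag, with the cross-correlation coefficient r as the outcome.
rng = np.random.default_rng(1)
rows = []
for dyad in range(14):
    condition = int(dyad >= 7)                      # 0 = noise, 1 = dual-task (dummy-coded)
    for lag_ms in range(-2000, 2001, 100):
        r = 0.3 - 0.0001 * abs(lag_ms) + rng.normal(scale=0.05)
        rows.append({"dyad": dyad, "lag": lag_ms, "condition": condition, "r": r})
df = pd.DataFrame(rows)
df["lag"] = (df["lag"] - df["lag"].mean()) / df["lag"].std()    # center and standardize

# Linear mixed-effects model: fixed effects for lag, condition, and their interaction,
# with by-dyad random intercepts and random slopes for lag.
model = smf.mixedlm("r ~ lag * condition", data=df, groups="dyad", re_formula="~lag")
print(model.fit().summary())
```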

As anticipated, increases in lag significantly predicted decreases in r, providing evidence for in-phase interpersonal synchrony of head movements during conversation (β = –.50, p < .0001). The main effect of lag indicated that partners’ head movements were most strongly correlated at lag 0—that is, in moment-to-moment comparisons. The correlation decreased as the time series were compared at increasingly disparate points.

However, contrary to our hypothesis, we found no significant difference between the noise and dual-task conditions (β = .19, p > .30), nor a significant effect of the interaction term (β = –.03, p > .60). In fact, the trend suggests that the opposite might be the case, with the dual-task condition being associated with higher cross-correlation coefficients (see Fig. 4). A two-sample t-test of the centered and standardized cross-correlation coefficients only at lag 0 showed a marginally significant increase in interpersonal synchrony during the dual-task condition, t(13) = –2.1, p < .06.

Fig. 4 Interaction plot of the linear mixed-effects model for our sample application, predicting interpersonal synchrony (r: y-axis) as a function of condition (blue = noise, orange = dual task) across lags of ±2,000 ms (x-axis)

Discussion

In the present study, we explored how interpersonal dynamics during naturalistic conversation are affected by environmental factors. Inspired by previous work in the auditory domain (Boker et al., 2002), we investigated how visual distractors and increased working memory load differentially affect interpersonal synchrony by using PsyGlass to quantify head movements. Although we replicated previous findings of head movement synchrony generally (Ramseyer & Tschacher, 2014), we found conflicting evidence for the impact of these conditions on synchrony.

Although the longer-range convergence was not significantly different between the two conditions, moment-to-moment (i.e., in-phase) synchrony was marginally higher in the dual-task condition, contrary to our expectations. These unexpected results could have several implications for this literature, to be disentangled with follow-up work. First, the results could suggest that—although higher working memory load may increase lag-0 synchrony—convergence unfolds similarly over a longer timescale, regardless of the nature of the external visual stimuli.

Second, these findings could suggest a reframing of the conditions in the present study as compared with those used by Boker and colleagues (2002). Rather than interpreting the auditory noise as a distractor, it might in fact have been more similar to the dual-task condition than the visual-noise condition: Both ambient noise and the dual task may be more task-relevant and less easily ignored during conversation than the irregular blue-to-red screen switches. Perhaps the key element is that distractors should in some way be unavoidable during interaction.

Using PsyGlass: Recommendations and limitations

Below we compile a number of recommendations and limitations to consider when using Google Glass with PsyGlass. These points should be useful for practical concerns about experimental design and data analysis with PsyGlass.

No prior Android experience is required, although it can be helpful. Prior experience with programming of some kind can be incredibly beneficial, especially in Java. However, resources for Glass, Android, Java, and JavaScript programming are widely available through various online tutorials and forums. Note that compiling PsyGlass will require following the basic GDK instructions (see the Method section above).

Troubleshooting modifications to PsyGlass can take time, especially for those new to Android and Glass development. Those new to Android coding should first familiarize themselves with the basic PsyGlass program and start with incremental changes to the code, building to larger extensions. Numerous developer resources for Android and Glass are available through third-party sources (e.g., programming forums, tutorial websites) and Google Developers (Google, Inc.: https://developers.google.com/).

In its current form, PsyGlass is very battery-intensive. Researchers may consider reducing the computational strain (e.g., by reducing the sampling rate) if using the application for extended periods of time, to preserve battery life. In our example experiment, PsyGlass actively ran for a maximum of 20 min per data collection session. By charging the Glass devices for up to 20 min between data collection sessions, we were able to run up to four back-to-back data collection sessions without battery problems. We imagine that this pattern could continue for longer but cannot say so from experience.

The on-board computer for Glass (which sits alongside the wearer’s right temple) can become quite warm to the touch after extended intensive use or charging. Although a very small number of participants commented on this warmth, no participants reported it as being uncomfortable, even when the Glass had been in use or charging for up to 3 h before their data collection session.

Because the Glass display does not have an opaque backing, nearby parties may be able to see some of the presented stimuli. Bright colors are the most easily noticeable, being recognizable from farther away than 45 feet. Although the presence of most text or shapes is perceivable from approximately 90 in., large text and shapes are somewhat identifiable as close as 21 in. and are distinctly readable by around 14 in. away. Small text, however, is unreadable even at 6 in. Researchers should take this into account and perform preliminary tests to ensure that it will not impinge on the experimental goals (e.g., during deception-based tasks). However, we have heard reports of others attaching lightweight backings to the Glass, which may serve as a solution in these cases.

Although Google Glass is designed to be worn over regular glasses, it can be somewhat difficult for some wearers to comfortably wear Glass while being able to easily see the entire screen. In some cases—like our color-based example study—being able to see most of the screen clearly should suffice. However, this may be an issue for experimental designs relying on text-based prompts or stimuli. Researchers may consider altering their experimental design or restricting participant eligibility in such cases.

Many participants will likely have had little to no prior experience using Google Glass. Anecdotally, many of our participants commented on how “exciting” or “weird” Glass was. We recommend that researchers at least briefly introduce Glass to participants before beginning the experiment. An introduction to Glass minimizes participants’ awkwardness with the device and reduces the chance that participants will interfere with key Glass capabilities during the experiment (e.g., by brushing the touchpad). Researchers may use our protocol—reported in the Materials and Procedure section—as a guide.

The framework is currently designed to protect data transfer between the server and connected Glass devices. Therefore, problems with wireless Internet connections can cause PsyGlass to terminate the data collection session or disconnect the Glass from the server entirely. All data collected prior to termination are still saved locally on the device. By prioritizing connectivity, PsyGlass is able to ensure that all commands are executed as intended, but this may be an issue for researchers working with unreliable or restricted wireless networks. This behavior can currently be changed by reprogramming PsyGlass, and we hope to release an alternate version that is more forgiving in this area.

Researchers may consider applying down-sampling procedures, band-pass filters, or moving averages for their data analysis, depending on project needs and the standard practices of relevant target research area(s). The high-resolution movement data provide high statistical power for time series analyses, but this power may not always be needed. An example of data manipulation and filtering has been provided above in the Analyses section.
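As one illustration, a moving average and a down-sampling step can be applied in a few lines of Python; the 100-ms window and the target rate below are arbitrary choices for illustration, not recommendations.

```python
import numpy as np
from scipy.signal import decimate

# `movement` stands in for a 250-Hz movement-magnitude series
# (see the preprocessing sketch in the Analyses section).
movement = np.abs(np.random.default_rng(2).normal(size=250 * 60))

smoothed = np.convolve(movement, np.ones(25) / 25, mode="same")   # 100-ms moving average
downsampled = decimate(smoothed, q=10)                            # 250 Hz -> 25 Hz
```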

General discussion

Wearable technology can provide researchers with opportunities to explore naturalistic behavior dynamics both in and out of the lab. PsyGlass capitalizes on Google Glass to give researchers a stimulus presentation and data collection tool in an easily customizable, open-source framework. We have provided an example application of PsyGlass to dyadic interaction research, but the paradigm is open to single- and multiparticipant studies. We welcome other researchers to join us in using or expanding PsyGlass on GitHub (www.github.com/a-paxton/PsyGlass). The openness of the Google Glass developer community stands as a resource for researchers interested in tapping into other dimensions of the Google Glass, from audio stimulus presentation to eye-camera recording.

Update regarding purchasing Google Glass

Google has recently shifted the Glass program to focus more on developers and enterprise needs through its “Glass at Work” program (https://developers.google.com/glass/distribute/glass-at-work). At the time of writing, those interested in purchasing Glass for research or educational needs may contact the Glass at Work program at glass-edu@google.com. Any changes or additional relevant information will be included on the readme file at the PsyGlass repository (http://github.com/a-paxton/PsyGlass).

Future directions for PsyGlass and wearable technology

Wearable solutions like PsyGlass and other tools (e.g., Olguín Olguín et al., 2009a) are helping researchers increase external validity and target the real-world behaviors that they are interested in exploring. Especially for complex behaviors like interaction, researchers must balance experimental controls with experimental designs targeting naturalistic behaviors. By providing wireless, portable, minimalistic behavior tracking, wearable technology can unobtrusively quantify behavioral metrics and give moment-to-moment control over stimulus presentation. These represent an addition to our tools for creating naturalistic, externally valid experiments that tap into the real-world behaviors we seek to capture. With PsyGlass, we hope to lower the barriers to entry for other researchers who are interested in capitalizing on these new opportunities.

In that vein, we intend to continue to expand PsyGlass as a methodological tool that can contribute to theoretical inquiry. Our basic goals include tapping additional Glass capabilities for data collection (e.g., gyroscope, eye-camera capture) and stimulus presentation (e.g., audio) to give researchers more experimental design and multimodal options. We have already created optional modules to implement lexical decision tasks on PsyGlass, available on GitHub. We hope to provide a suite of collection and presentation options that others can use to cobble together versions of PsyGlass that fit their needs. Our first goal for major expansion is to create a way for partners’ Glass devices to be interactively updated by one another—for instance, by having the amplitude of movement of one Glass (measured by the accelerometer) update the visual stimuli of a second, connected Glass. In doing so, PsyGlass could subtly prompt interaction dynamics and alter interpersonal behaviors on the basis of prespecified events. By releasing the code to an open community of programmers, we hope to encourage others to join us in expanding and refining the PsyGlass tool.