Eye tracking under dichoptic viewing conditions: a practical solution

Abstract

In several research contexts it is important to obtain eye-tracking measures while presenting visual stimuli independently to each of the two eyes (dichoptic stimulation). However, the hardware that allows dichoptic viewing, such as mirrors, often interferes with high-quality eye tracking, especially when using a video-based eye tracker. Here we detail an approach to combining mirror-based dichoptic stimulation with video-based eye tracking, centered on the fact that some mirrors, although they reflect visible light, are selectively transparent to the infrared wavelength range in which eye trackers record their signal. The method we propose is straightforward, affordable (on the order of US$1,000), and easy to implement, yet for many purposes it improves on existing methods, which tend to require specialized equipment and often compromise the quality of the visual stimulus and/or the eye-tracking signal. The proposed method is compatible with standard display screens and eye trackers, and poses no additional limitations on the quality or nature of the stimulus presented or the data obtained. We include an evaluation of the quality of eye-tracking data obtained using our method, and a practical guide to building a specific version of the setup used in our laboratories.

Introduction

Researchers of human behavior and cognition record participants’ eye dynamics for many purposes. Gaze direction and pupil size reveal much about a person’s cognitive states and hidden intentions (Naber et al., 2013). Gaze direction, for instance, can inform about factors such as attention allocation (Deubel & Schneider, 1996; Pastukhov & Braun, 2010) and decision making (Reddi & Carpenter, 2000), while pupil size reveals aspects of visual processing (Barbur, 2003; Naber & Nakayama, 2013) and task engagement (Gilzenrat et al., 2010), among other things. Dichoptic stimulus presentation, the practice of showing two distinct images to a participant’s two eyes, also has a range of purposes, for instance in 3D entertainment and research on 3D vision (Barendregt et al., 2015; Held et al., 2012; Julesz, 1971), as well as in experimental paradigms that center on interocular conflict, such as binocular rivalry and continuous flash suppression (Blake & Logothetis, 2002; Tsuchiya & Koch, 2005). It can be important to combine dichoptic presentation and eye tracking within a single paradigm, for instance to evaluate eye movements in 3D scenes (Erkelens & Regan, 1986; Wismeijer et al., 2010) or ocular responses to visual input that remains unseen due to interocular conflict (Rothkirch et al., 2012; Spering et al., 2011). Nevertheless, combining the two techniques remains technically challenging and fraught with limitations, for the simple reason that the components used to achieve dichoptic stimulation, such as mirrors or prisms, often interfere with recording of the eyes. This is especially true for approaches that use a camera to record the eyes (i.e., video-based eye tracking), as these rely on an unobstructed line of sight to the eyes. In this paper we present a recently developed method that offers a convenient and simple solution for eye recording during dichoptic stimulus presentation.

Existing work that combines dichoptic stimulation (see Law et al., 2013, for an overview of methods) with eye tracking tends to fall into one of two categories. Work in the first category uses eye-tracking techniques that do not require a clear line of sight to the eyes. For instance, the scleral coil technique (Erkelens & Regan, 1986; Kalisvaart & Goossens, 2013; Robinson, 1963) relies on a type of contact lens fitted with one or more metal coils and inserted into the eye of a participant seated inside a changing magnetic field. The position of the eye is then inferred from the voltage induced in the coil(s). Electro-oculography (Fox et al., 1975; Leopold et al., 1995; Zaretskaya et al., 2010), in turn, relies on a difference in electrical potential that exists between the front and the back of the eye. When a pair of skin electrodes is placed in close proximity to the eye, this difference allows eye position to be inferred from the potential difference between the electrodes. The second category of studies combines dichoptic stimulation with eye tracking, even video-based eye tracking, by separating the two eyes’ inputs without obstructing the camera’s view. This can be achieved, for instance, with commercially available goggle systems that have cameras integrated in the eye pieces (Frässle et al., 2014). Alternatively, the two eyes’ views can be separated spectrally by using anaglyph glasses (Hayashi & Tanifuji, 2012; Van Dam & Van Ee, 2006; Wismeijer & Erkelens, 2009), or temporally by combining polarized glasses with a dynamic polarization screen that transmits light of a different polarization angle on alternate monitor frames (Wismeijer et al., 2010). Each of these methods has its benefits but also its limitations.
For instance, the scleral coil technique and electro-oculography are moderately invasive and do not allow pupillometry, approaches using anaglyph glasses pose limitations on the colors used in the visual display, techniques that rely on light polarization do not work with certain (flatscreen) monitors and projectors (which emit light at restricted polarization angles), and currently available goggles have low spatial and temporal eye-tracking resolution.

In this paper we describe a technique for combining video-based eye tracking, an accurate approach to eye tracking and arguably the most popular today (Mele & Federici, 2012), with dichoptic presentation by means of a mirror stereoscope: an arrangement of mirrors that diverts the two eyes’ lines of sight to two different visual displays, thus posing no limitations on the nature of the display or the monitor type. Although a small number of papers report positioning a video-based eye tracker such that it views the eyes through the space that remains at the sides of the mirrors (e.g., Rothkirch et al., 2012; Spering et al., 2011), this is generally difficult because the mirrors leave little room for a clear line of sight. Indeed, the difficulty of getting a clear view of both eyes in this situation may be the reason why such work recorded samples from only one eye (Spering et al., 2011). To make full use of the potential of video-based eye trackers in a dichoptic stimulation setting, we recently pioneered an approach centered on two facts: video-based eye trackers generally operate in the infrared part of the light spectrum, and some suppliers of optical equipment carry infrared-transparent mirrors. This approach replaces the mirrors in a basic stereoscope with ones that transmit infrared light, offering the camera an unobstructed view through the mirrors (Naber et al., 2011; see Carle et al., 2011, for a different implementation of this idea). Besides drawing attention to this general approach, in the present paper we describe a version of this setup that we developed more recently with an emphasis on easy construction, and we present an evaluation of the quality of eye data collected using this setup in combination with infrared eye trackers of two different makes.

Methods

The setup

A schematic overview of the setup, positioned on top of a standard office desk, is shown in Fig. 1. A participant seated in this setup would be seen from the back in this figure, and would rest his or her head on a standard head rest (not shown) while looking into a different mirror with each eye. The mirror arrangement is a variant of the classic Wheatstone stereoscope (Wheatstone, 1838), and consists of two mirrors, each positioned at a 45° angle relative to the participant’s midline, that direct the two lines of sight in opposite directions, toward two computer monitors that face each other directly. An infrared-sensitive video-based eye tracker, including camera and illuminator, is positioned directly in front of the participant’s face but behind the mirrors, on the same platform that supports the mirrors. The eye tracker is symbolically represented by a box in Fig. 1. One challenge in dichoptic setups that use two separate monitors is achieving satisfactory alignment of the two monitors (more on this in the Discussion section). Our setup opts for a low-tech solution: two flat-screen monitors mounted on flexible desk-mount monitor arms that allow easy adjustment, in combination with a board fixed to the desk straight below each monitor. As explained in detail in the appendix, this arrangement allows the monitors to be positioned manually using the boards as reference (note that the boards may appear offset relative to the monitors in the 3D rendering of Fig. 1 due to limited depth cues in the image; in reality they are straight below the monitors). The appendix also provides a detailed overview of the components used in this setup, as well as assembly instructions, and touches on optional modifications that prevent participants from seeing visual stimuli directly from the corners of their eyes, besides their intended view via the mirrors.
In our laboratories we have constructed two separate, essentially identical instances of the setup illustrated in Fig. 1. The mirrors in one of the setups are two front-surface mirrors advertised as “cold mirrors” with, at an angle of incidence of 45° relative to the mirror’s normal, near-complete reflectance (>95 %) of visible wavelengths between 400 and 690 nm and near-complete transmission (>90 %) of near-infrared wavelengths between 750 and 1,200 nm (Edmund Optics, stock number #64-452; dimensions 10.1 × 12.7 cm). The mirrors used in the other setup have very similar specifications: near-complete reflectance (>90 %) of visible wavelengths between 425 and 650 nm and near-complete transmission (>85 %) of near-infrared wavelengths between 800 and 1,200 nm (Edmund Optics, item discontinued; dimensions 10.1 × 12.7 cm). In a laboratory that already has standard eye-tracking materials such as an eye tracker, a head rest, and monitors, the approximate price of the additional components required for this setup would approach US$1,000 (mirrors: $400; mirror holders: $150; fiberboard, glue, etc.: $100; monitor arms: $300). This price compares favorably to some alternatives such as goggle systems (Beach et al., 1998; J. Sommer, pers. comm.), although it exceeds the costs of an option like cardboard anaglyph glasses.

Fig. 1

Schematic illustration of the setup used in our experiments. The setup is positioned on top of a desk, and the participant (not shown) would be seen from the back, seated at the desk and looking into a different mirror with each eye

Fig. 2

Data collected from representative participants using the Eyelink 1000 (left figure of each panel) and the Eye Tribe (right figure of each panel) during our three tasks, involving changing background luminance (panel a), saccades (panel b) and smooth pursuit (panel c). The vertical dashed lines indicate moments of relevant experiment events: luminance changes in panel a, changes in target position for panel b, and the beginnings and ends of target movement for panel c. For these recordings one of the two mirrors was removed, so that one eye was tracked through an infrared-transparent mirror while the other one was tracked with an uninterrupted line of sight. The close correspondence between the two eyes’ recordings (quantified in the main text) illustrates that passage through an infrared-transparent mirror affects the eye signal minimally

Eye recordings

To establish the utility of this setup for experimental purposes we assessed the quality of the eye data collected with infrared eye trackers through the mirrors. Because different types of eye trackers have different characteristics (e.g., their sensors might not be equally responsive, or they may use different software algorithms to identify the pupil in a camera frame), we tested our setup with two different kinds of infrared eye trackers: a desktop-mounted Eyelink 1000 recording binocularly at 1,000 Hz (SR Research Ltd., Mississauga, Ontario, Canada; used in combination with the first set of mirrors mentioned above) and an Eye Tribe Tracker recording at 30 Hz (The Eye Tribe Aps, Copenhagen, Denmark; used with the second set of mirrors mentioned above). The product specifications suggest that both trackers should work well in this setup, as they both operate at wavelengths transmitted by the mirrors (Eyelink: 890–940 nm; Eye Tribe: around 850 nm). Each of the eye trackers is part of a different version of the setup, one at Utrecht University and one at Michigan State University. Three naïve participants were tested in each of the setups.

We established data quality in two different ways. First, with the mirrors in place each participant performed a calibration-validation procedure in which he fixated on-screen dots (0.1 dva) that appeared at different screen locations, one after the other (calibration), and then performed this same procedure a second time (validation). Our measure of data quality was the correspondence in inferred gaze angle between both repetitions. In the case of the Eyelink tracker this procedure was, in fact, the 13-point calibration-validation sequence that is part of the Eyelink software. Here the outer eight calibration points trace the edges of a large rectangle centered on the middle of the screen (in our case 16.5 dva wide by 9.3 dva high); four additional points are positioned at the corners of a rectangle half that size, and the remaining point (presented first in the sequence) is at the screen center. Inferred gaze angle for each point was compared between calibration and validation by the Eyelink software per eye. The average of both eyes is reported here. In the case of the Eye Tribe tracker the participant first performed the Eye Tribe’s built-in calibration routine, after which he followed with his eyes an onscreen dot that visited the same 13 points as in the Eyelink calibration procedure. In this case inferred gaze angle was taken to be the median angle recorded for each calibration point, averaged across eyes, and the comparison was between this inferred angle and the physical angle of each onscreen point.
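The 13-point layout and the deviation measure can be sketched in a few lines. This is an illustrative reconstruction based on the dimensions given above, not the Eyelink's internal routine; coordinates are in degrees of visual angle with the screen center at the origin:

```python
import math

def calibration_grid(width_dva=16.5, height_dva=9.3):
    """Return the 13 calibration points: the screen center first, then the
    corners and edge midpoints of the full rectangle (8 points), then the
    corners of a rectangle half that size (4 points)."""
    w, h = width_dva / 2, height_dva / 2
    outer = [(x, y) for y in (-h, 0, h) for x in (-w, 0, w) if (x, y) != (0, 0)]
    inner = [(x, y) for y in (-h / 2, h / 2) for x in (-w / 2, w / 2)]
    return [(0.0, 0.0)] + outer + inner

def mean_deviation(calibration, validation):
    """Mean Euclidean distance (in dva) between the gaze estimates obtained
    for the same points during calibration and during validation."""
    return sum(math.dist(c, v) for c, v in zip(calibration, validation)) / len(calibration)
```

A perfect repetition would give a deviation of zero; the per-participant values reported in the Results correspond to this quantity averaged over the 13 points.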

As a second indicator of data quality, we removed one of the two mirrors, thus providing the camera with a direct view of one eye and a view through the infrared transparent mirror of the other eye (counterbalancing across setups which mirror was removed). The participant then completed a set of three tasks involving basic ocular events, and we used the resulting data to compare the signal between the eyes. Specifically, three participants fixated a point while the screen’s brightness stepped repeatedly between 0 and 100 cd/m2; they made saccades to follow a dot that jumped between corners of an imaginary square, each corner located at 5° from fixation; and they used pursuit eye movements to follow a dot that slowly moved between these same four points at a speed of 3°/s. Without any data preprocessing, we then analyzed the number of lost samples per eye, the correlation in estimated pupil size between eyes for the first task, and the correlation in estimated gaze angle between eyes for the remaining tasks. Given that eye position and pupil size are typically yoked between the two eyes outside blink periods, any systematic deviations are indicative of a systematic influence of the mirror on the recorded signal.
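The between-eye comparison amounts to discarding samples lost in either eye and computing a Pearson correlation on the rest. A minimal sketch, assuming lost samples are marked as None in the exported traces (the actual missing-data code depends on the tracker's export format):

```python
import math

def valid_pairs(left, right):
    """Drop samples lost in either eye (marked None) before correlating."""
    pairs = [(l, r) for l, r in zip(left, right) if l is not None and r is not None]
    return [p[0] for p in pairs], [p[1] for p in pairs]

def pearson(x, y):
    """Pearson correlation between two equally long numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)
```

The same routine applies to pupil size (first task) and to horizontal and vertical gaze angle (remaining tasks); the proportion of None samples per eye gives the lost-sample percentages reported below.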

Results

The calibration-validation procedure was performed without problems with the mirrors in place. We performed three repetitions for each of the eye trackers, obtaining a valid recording of gaze angle for each of the 13 calibration points in all cases. For the three participants in the setup using an Eyelink tracker the average deviations in estimated gaze angle between calibration and validation were 0.52, 0.35, and 0.66° of visual angle (dva); for the Eye Tribe tracker the deviations were 2.15, 1.76, and 4.14 dva. We will comment on the larger deviation for the latter tracker in the Discussion section.

During the short experiments with only one mirror in place the trackers generally kept good track of both pupils, regardless of whether the line of sight passed through a mirror. The Eyelink missed no samples for either eye, and for the three participants the Eye Tribe missed 17.6, 7.1, and 12 % of the samples for the eye with mirror, versus 12.5, 10.3, and 8 % for the eye without mirror (not statistically different; paired t-test across three participants, t(2) = 0.76, p = 0.52). Figure 2 shows data for one representative participant in the Eyelink setup (left figure in each pair) and for one in the Eye Tribe setup (right figure in each pair). The correspondence between eyes was overall very good for both trackers, as quantified by between-eye Pearson correlations for each of the three experiments. Specifically, for the experiment with changing background luminance (Fig. 2a), the average correlation in pupil size was 0.99 for both trackers. For the saccade experiment (Fig. 2b) the average correlations in both horizontal and vertical gaze angle were 0.99 and 0.97 for the Eyelink and Eye Tribe, respectively. Finally, for the smooth pursuit experiment (Fig. 2c) the average correlations in both horizontal and vertical gaze angle were 0.99 and 0.96 for the Eyelink and Eye Tribe, respectively.

Discussion

We present a straightforward method that allows experimenters to simultaneously track the eyes binocularly and present visual stimuli dichoptically. The approach, which centers on the combination of an infrared eye tracker and infrared-transparent mirrors, can be of value for studies that focus on 3D vision or interocular suppression. The proposed eye-tracking method has several benefits over previous methods that involve dichoptic stimulation. It allows a full, unobstructed view of both eyes and the use of large, high-quality visual stimuli. Furthermore, eye-tracking data quality is limited only by the tracker used (not by the dichoptic presentation system), and existing eye-tracker hardware needs no modification for use in this setup.

Our approach does have limitations. One limitation is that the eye tracker’s infrared illuminator, whose emitted wavelengths extend into the visible range, can in some cases be seen through the mirrors, potentially contaminating the visual display. The severity of this concern depends on the particular experiment. While the illuminators are faintly visible when the screen is black, they cannot be seen when background luminance is higher. Moreover, in cases where the stimulus of interest covers a relatively small part of the screen, the illuminators can be moved so that they do not overlap with this part. A second limitation is the low-tech method by which alignment between screens is achieved. This method might not suffice, for instance, for experiments that require exactly controlled binocular disparity across large regions of the screen. However, it has proven adequate in our experiments, which tend to rely on smaller stimuli near the screen center.

We note that in our calibration-validation procedure the deviation between calibration and validation was considerably larger for the data from the Eye Tribe than for those from the Eyelink. One potential reason is that the Eye Tribe signal seems to contain more noise in general, as is also apparent in Fig. 2. A second potential reason is that, whereas calibration and validation were performed in immediate succession for the Eyelink, there was a larger time gap for the Eye Tribe (see Methods), potentially allowing more participant movement. Either way, the second experiment, with only one mirror in place, showed high correlations between the two eyes’ signals for both trackers, arguing against the idea that any calibration-validation difference was to a large degree due to the presence of the mirrors.

In closing, it is worth mentioning a recent movement toward the use of pupillary and oculomotor responses instead of subjective button press reports in certain behavioral tasks that involve dichoptic stimulation (Frässle et al., 2014; Naber et al., 2011; Tsuchiya et al., 2015). Specifically, to prevent potential confounds associated with manual responses in experiments that manipulate consciousness through interocular suppression, such “no report” designs rely on eye recordings to infer perception. The method presented here provides an ideal solution for those wishing to implement such designs.

References

  1. Barbur, J. L. (2003). Learning from the pupil: Studies of basic mechanisms and clinical applications. In The visual neurosciences. Cambridge, MA: MIT Press.

  2. Barendregt, M., Harvey, B. M., Rokers, B., & Dumoulin, S. O. (2015). Transformation from a retinal to a cyclopean representation in human visual cortex. Current Biology, 25(15), 1982–1987.

  3. Beach, G., Cohen, C. J., Braun, J., & Moody, G. (1998). Eye tracker system for use with head mounted displays. IEEE International Conference on Systems, Man, and Cybernetics, 5, 4348–4352.

  4. Blake, R., & Logothetis, N. K. (2002). Visual competition. Nature Reviews Neuroscience, 3(1), 13–21.

  5. Carle, C. F., James, A. C., Kolic, M., Loh, Y.-W., & Maddess, T. (2011). High-resolution multifocal pupillographic objective perimetry in glaucoma. Investigative Ophthalmology & Visual Science, 52(1), 604–7.

  6. Deubel, H., & Schneider, W. X. (1996). Saccade target selection and object recognition: Evidence for a common attentional mechanism. Vision Research, 36(12), 1827–1837.

  7. Erkelens, C. J., & Regan, D. (1986). Human ocular vergence movements induced by changing size and disparity. The Journal of Physiology, 379(1), 145–169.

  8. Fox, R., Todd, S., & Bettinger, L. A. (1975). Optokinetic nystagmus as an objective indicator of binocular rivalry. Vision Research, 15(7), 849–853.

  9. Frässle, S., Sommer, J., Jansen, A., Naber, M., & Einhäuser, W. (2014). Binocular rivalry: Frontal activity relates to introspection and action but not to perception. The Journal of Neuroscience, 34(5), 1738–1747.

  10. Gilzenrat, M. S., Nieuwenhuis, S., Jepma, M., & Cohen, J. D. (2010). Pupil diameter tracks changes in control state predicted by the adaptive gain theory of locus coeruleus function. Cognitive, Affective, & Behavioral Neuroscience, 10(2), 252–269.

  11. Hayashi, R., & Tanifuji, M. (2012). Which image is in awareness during binocular rivalry? Reading perceptual status from eye movements. Journal of Vision, 12(3), 5.

  12. Held, R. T., Cooper, E. A., & Banks, M. S. (2012). Blur and disparity are complementary cues to depth. Current Biology, 22(5), 426–431. doi:10.1016/j.cub.2012.01.033

  13. Julesz, B. (1971). Foundations of cyclopean perception. Chicago: The University of Chicago Press.

  14. Kalisvaart, J. P., & Goossens, J. (2013). Influence of retinal image shifts and extra-retinal eye movement signals on binocular rivalry alternations. PLoS ONE, 8(4), e61702.

  15. Law, P. C. F., Paton, B. K., Thomson, R. H., Liu, G. B., Miller, S. M., & Ngo, T. T. (2013). Dichoptic viewing methods for binocular rivalry research: Prospects for large-scale clinical and genetic studies. Twin Research and Human Genetics, 16(6), 1033–1078.

  16. Leopold, D. A., Fitzgibbons, J. C., & Logothetis, N. K. (1995). The role of attention in binocular rivalry as revealed through optokinetic nystagmus. Artificial Intelligence Lab Publications. Available at: http://hdl.handle.net/1721.1/6649

  17. Mele, M. L., & Federici, S. (2012). Gaze and eye-tracking solutions for psychological research. Cognitive Processing, 13(1), 261–265.

  18. Naber, M., Frässle, S., & Einhäuser, W. (2011). Perceptual rivalry: Reflexes reveal the gradual nature of visual awareness. PLoS ONE, 6(6), e20910.

  19. Naber, M., & Nakayama, K. (2013). Pupil responses to high-level image content. Journal of Vision, 13(6), 7.

  20. Naber, M., Stoll, J., Einhäuser, W., & Carter, O. C. (2013). How to become a mentalist: Reading decisions from a competitor’s pupil can be achieved without training but requires instruction. PLoS ONE, 8(8), e73302.

  21. Pastukhov, A., & Braun, J. (2010). Rare but precious: Microsaccades are highly informative about attentional allocation. Vision Research, 50(12), 1173–1184.

  22. Reddi, B. A., & Carpenter, R. H. (2000). The influence of urgency on decision time. Nature Neuroscience, 3(8), 827–830.

  23. Robinson, D. A. (1963). A method of measuring eye movement using a scleral search coil in a magnetic field. IEEE Transactions on Biomedical Engineering, 10, 137–145.

  24. Rothkirch, M., Stein, T., Sekutowicz, M., & Sterzer, P. (2012). A direct oculomotor correlate of unconscious visual processing. Current Biology, 22(13), R514–R515. doi:10.1016/j.cub.2012.04.046

  25. Spering, M., Pomplun, M., & Carrasco, M. (2011). Tracking without perceiving: A dissociation between eye movements and motion perception. Psychological Science, 22(2), 216–225.

  26. Tsuchiya, N., & Koch, C. (2005). Continuous flash suppression reduces negative afterimages. Nature Neuroscience, 8(8), 1096–1101.

  27. Tsuchiya, N., Wilke, M., Frässle, S., & Lamme, V. A. F. (2015). No-report paradigms: Extracting the true neural correlates of consciousness. Trends in Cognitive Sciences, 19(12), 757–770.

  28. van Dam, L. C. J., & Van Ee, R. (2006). Retinal image shifts, but not eye movements per se, cause alternations in awareness during binocular rivalry. Journal of Vision, 6(11), 1172–1179.

  29. Wheatstone, C. (1838). Contributions to the physiology of vision. part the first: On some remarkable, and hitherto unobserved, phenomena of binocular vision. Philosophical Transactions of the Royal Society of London, 128, 371–394.

  30. Wismeijer, D. A., & Erkelens, C. J. (2009). The effect of changing size on vergence is mediated by changing disparity. Journal of Vision, 9(13), 12.

  31. Wismeijer, D. A., Erkelens, C. J., & Van Ee, R. (2010). Depth cue combination in spontaneous eye movements. Journal of Vision, 10(7), 76.

  32. Zaretskaya, N., Thielscher, A., Logothetis, N. K., & Bartels, A. (2010). Disrupting parietal function prolongs dominance durations in binocular rivalry. Current Biology, 20(23), 2106–2111.

Acknowledgments

The authors thank Pieter Schiphorst for his role in designing the setup and for providing the graphics of Fig. 1, and Steffen Klingenhoefer for his fruitful suggestions.

Author information

Correspondence to Jan W. Brascamp.

Appendix

This appendix provides specifications of one of the two versions that we built of the setup described in the main text, and straightforward instructions on how to assemble its components. This version is positioned on top of a desk that is 74 cm high, 61 cm deep, and 152 cm wide. When building a version of the setup, the exact dimensions may depend on factors such as the dimensions of the desk supporting the setup and the desired distance between the participant and the screens.

Fiberboard elements

The setup is primarily built of fiberboard. The table below details the dimensions of the pieces of fiberboard used, distinguishing the setup’s “central component,” which supports the mirrors and the eye tracker, from the setup’s two “reference boards,” which are each positioned below one of the monitors (see Fig. 1 for reference). The reference boards help the user position the monitors, which are mounted on flexible monitor arms available at office supply stores. For instance, a monitor can be positioned to rest on top of, and flush with, a reference board’s long vertical element (see table below). In addition, in our experiments we use a separate piece of fiberboard that is not among the components listed in the table, termed the “calibration board,” on which we have drawn reference lines indicating the desired positions of the edges of the monitors. When positioning the monitors we align this calibration board with one of the reference boards and move the corresponding monitor to line up with the reference lines.

When it comes to how exactly to position the two monitors, the main goal is to ensure that the two monitors’ images are easily fused by each participant. In one approach we have applied successfully, we use the reference boards and calibration board to orient the monitors, for all participants, frontoparallel and centered on the centers of the mirrors. Because each participant is positioned slightly differently in the setup, we then allow each participant to displace items presented on the two screens so that they visually align. For instance, participants may move a dot shown in alternation on the left and right monitors until its apparent displacement is minimal, after which we use the resulting on-screen positions as the display centers for that participant.

Component           Dimensions (cm)   Number   Remark
Central component
                    80 × 25 × 2       1        Horizontal top
                    23 × 25 × 2       1        Horizontal bottom
                    21 × 32 × 2       1        Central vertical
                    32 × 25 × 2       1        Front-facing vertical
Reference boards
                    61 × 11 × 2       2        Long horizontal
                    66 × 20 × 2       2        Long vertical
                    11 × 15 × 2       4        Small vertical
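The per-participant alignment step described above can be sketched as simple bookkeeping. This is a hypothetical illustration: the pixel coordinate convention (y increasing downward), the 2-px nudge step, and the choice to move only the right monitor's dot are all assumptions, and the actual stimulus presentation and response collection are omitted:

```python
def align_display_centers(nudges, start_left=(960, 540), start_right=(960, 540), step=2):
    """Apply a sequence of participant nudges ('left'/'right'/'up'/'down')
    to the right monitor's dot until the alternately flashed dots appear
    fused; returns the per-participant display centers for both screens.
    How the nudges are collected (keyboard, button box) is left to the
    experiment code; here only the right screen's dot moves."""
    dx = {'left': -step, 'right': step, 'up': 0, 'down': 0}
    dy = {'left': 0, 'right': 0, 'up': -step, 'down': step}
    x, y = start_right
    for n in nudges:
        x += dx[n]
        y += dy[n]
    return start_left, (x, y)
```

The returned coordinates would then serve as the stimulus centers on the two screens for the remainder of that participant's session.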

The pieces listed in the table above were all painted black to reduce light scatter (applying black fabric may reduce light scatter even more), assembled using glue and screws, and clamped to the desk using bar clamps, keeping in mind the following considerations. The central component’s top element was placed such that it stops 8 cm short of the front of the desk, leaving space for the participant’s face when positioned in a head rest mounted on the front edge of the desk. The reference boards, in turn, were arranged such that the long horizontals are exactly aligned with the edges of the desk, whereas the long verticals stick out 4 cm beyond the front of the desk. This provides easy access for clamping the above-mentioned calibration board to one of the reference boards with a bar clamp when positioning a monitor.

Remaining elements

We use flat-screen monitors mounted on standard monitor arms clamped to the side of the desk (clamping both the reference board and the desk). These arms allow translation in three dimensions as well as rotation in the plane of the screen. Our mirrors are mounted on mirror mounts that are sold for this purpose by the same suppliers that stock cold mirrors. In our case we have attached these mounts to the fiberboard using double-sided tape. One thing to keep in mind is that the mirrors in this setup are positioned to touch at a 45° angle in the center, right in front of the participant’s nose (see Fig. 1). Because the mounts are usually considerably thicker than the mirrors themselves, this requires either mounts that allow the mirrors to be shifted off to the side so that the mirrors can touch without the mounts touching (as in Fig. 1), or mounts that are less wide than the mirrors so that the mirrors stick out at the sides. The eye tracker itself can be positioned freely on top of the central component, just as it would be on a desktop. The central component provides enough space to achieve a suitable distance between the eyes and the tracker (the prescribed distance is 50–70 cm for the Eyelink 1000 and 45–75 cm for the Eye Tribe Tracker). Finally, some experiments require that participants be unable to see the screens directly from the corners of their eyes (on the temporal side). In our setup we prevent such a direct view with custom-made “blinders” of black paper attached to the two posts of the participant’s head rest, on either side of the head. An additional blinder (not part of our setup), extending from the central line midway between the mirrors to the center of the face, would restrict the field of view to the desired area even further.
An alternative, but more complex, way of restricting the field of view would be to build funnel components that extend from the mirrors’ borders towards the monitors’ edges (this was the approach taken in the setup that inspired the present one; Naber et al., 2011).

Cite this article

Brascamp, J.W., Naber, M. Eye tracking under dichoptic viewing conditions: a practical solution. Behav Res 49, 1303–1309 (2017). https://doi.org/10.3758/s13428-016-0805-2

Keywords

  • Eye tracking
  • Pupil
  • Gaze
  • Dichoptic viewing
  • Stereoscope
  • Mirror
  • Infrared