Abstract
Eye tracking accuracy is affected in individuals with vision and oculomotor deficits, impeding our ability to answer important scientific and clinical questions about these disorders. In particular, it is difficult to disambiguate decreases in eye movement accuracy from changes in the accuracy of the eye tracking itself. We propose the EyeRobot, a low-cost robotic oculomotor simulator capable of emulating healthy and compromised eye movements to provide ground truth assessment of eye tracker performance and of how different aspects of oculomotor deficits might affect tracking accuracy and performance. The device can operate with eccentric optical axes or large deviations between the eyes, as well as simulate oculomotor pathologies, such as large fixational instabilities. We find that our design can provide accurate eye movements for both central and eccentric viewing conditions, which can be tracked using a head-mounted eye tracker (Pupil Core). As proof of concept, we examine the effects of eccentric fixation on calibration accuracy and find that Pupil Core's existing eye tracking algorithm is robust to large fixation offsets. In addition, we demonstrate that the EyeRobot can simulate realistic eye movements, such as saccades and smooth pursuit, that can be tracked using video-based eye tracking. These tests suggest that the EyeRobot, an easy-to-build and flexible tool, can aid eye tracking validation and future algorithm development in healthy and compromised vision.
Introduction
Eye tracking is essential for the understanding of certain visual and oculomotor disorders, such as macular degeneration (Verghese et al., 2021), strabismus (Agaoglu et al., 2015; Kenyon et al., 1981), or nystagmus (Rosengren et al., 2020). While much progress has been made, these disorders add complexities to eye tracking approaches, such as large misalignment between the visual (fixational locus to object) and pupillary axes, decreased fixational accuracy and increased fixational instability, and potential misalignment of the two eyes. Some of these issues have been studied extensively in traditional, corneal reflection (glint)-based systems (Tarita-Nistor et al., 2015) and using devices that allow for ground truth measurements using retinal imaging (Shanidze et al., 2016), but interpretation of eye tracking data in these disorders is complicated and faces potential confounds. At the same time, these disorders provide a fertile research space, as much is yet to be understood about how they can impact activities of daily living and mobility, and they simultaneously provide a fascinating model of neural changes associated with the disorders (e.g., eccentric oculomotor reference, loss of visual field, eye disconjugacy).
The advent of head-based eye tracking devices allows for a wider set of research questions and studies that focus on ecological validity (by allowing gaze tracking that includes eye and head movements) and issues such as mobility. In recent years, several low-cost head-based systems have emerged, making their widespread use in laboratories and clinics possible. This expansion can allow for a more complete understanding of eye and gaze movement deficits that might occur in oculomotor disorders, where head movements may be an especially important consideration (Ehinger et al., 2019; Shanidze & Velisar, 2020). One example of such a device is the Pupil Labs Pupil Core eye tracking platform, which consists of custom, open-source data collection software and a wearable eye tracking headset. The device is low-cost and lightweight, and its open-source software suite and modular hardware can be modified to researchers' specific needs. The device uses a glint-free, computer vision-based algorithm for eye tracking (Swirski & Dodgson, 2013) that estimates gaze direction by optimizing parameters of an eye model. The algorithm has certain built-in assumptions, such as the alignment of the visual and pupillary axes and eye convergence on the target during calibration. These assumptions, however, are often violated in individuals with oculomotor and visual disorders. In strabismus, for example, the mean eye deviation is 14.2° (Economides et al., 2015), far from the ocular alignment assumed in gaze point estimation.
In the case of macular degeneration (MD), the central retina, including the high-resolution fovea, is often damaged, and one or more peripheral preferred retinal loci (PRLs) may develop (Crossland et al., 2005). With enough time, these can become a new oculomotor reference (White & Bedell, 1990), although they may also be consumed by the advancing lesion as the disease progresses, and new ones can develop. Furthermore, PRLs need not be (and rarely are) in corresponding locations on both retinas. With eccentric PRLs, individuals also tend to have greater fixational instability and less accurate oculomotor behaviors, notably saccades (reviewed in Verghese et al., 2021). These deficits may affect eye tracking calibration and fidelity, and this detriment may vary across different types of eye trackers. For example, we demonstrated both increases in calibration error in MD participants and variations between trackers in a prior report (Love et al., 2021), where eye tracking calibration errors in MD participants were somewhat greater than in age-matched controls using the EyeLink 1000, and significantly greater using the Pupil Core eye tracking platform (the latter tested with head-restrained and head-unrestrained calibration approaches for a more direct comparison to the EyeLink). This increase in calibration error may arise because MD-related oculomotor changes violate the assumptions of the gaze estimation algorithm, leading to greater calibration and tracking errors. Indeed, Love et al. (2021) showed that calibration errors increased with larger fixation eccentricities. However, because oculomotor and vision deficit-related factors are confounded, it is difficult to determine the exact cause. The misalignment of the visual axis and reduction in fixational stability due to eccentric fixation, difficulty in finding or resolving the calibration target, nonlinear eye placement on the calibration targets (e.g., due to multiple fixational loci used throughout calibration), or deficits in saccadic latency and accuracy could all play a role in reducing calibration accuracy. Without a ground-truth method to evaluate these factors independently and in combination, it is difficult to assess whether measured changes (compared to controls) in eye tracking accuracy and effectiveness are the result of oculomotor changes associated with MD, or of poor tracking by the device itself.
To assess eye tracking accuracy in the presence of disease-related oculomotor changes, we present here a low-cost robotic oculomotor simulator (EyeRobot) that can provide ground truth assessment of eye tracking fidelity. The EyeRobot consists of two independently controlled eyes that can rotate horizontally and vertically, and can be calibrated for eye tracking using standard eye tracker calibration routines. The robot eyes can be statically or dynamically positioned in a relatively precise manner for a range of conjugate and disconjugate behaviors, including saccades and smooth pursuit in the three-dimensional space, as well as emulating behaviors associated with visual/oculomotor deficits (e.g., eccentric fixation, fixational instability, disconjugate eye movements/fixation). The device is designed to be easily manufactured and used for validation purposes with a range of video-based glint-free eye trackers and in any setting where eye tracking with clinical populations is performed.
As a proof of concept, we use the EyeRobot to calibrate an eye tracker (Pupil Core) by emulating both central and eccentric fixation, and examine the effects of eccentric fixation on eye tracking accuracy. We find that fixation eccentricity can be compensated for by the standard eye tracking algorithms in the Pupil Capture software through a rotation of the coordinate frame. An earlier design of the EyeRobot was previously presented in short form (Love et al., 2021).
Methods
We present two versions of the EyeRobot: a custom-manufactured design optimized for rigidity and stability (Figs. 1 and 2), and a 3D-printable design for easier manufacture and assembly (Fig. 3). The results below are reported for the custom design.
Design
Complete parts lists for both the custom-made prototype and the 3D-printed version are included in the Supplementary Information (Prototype: Tables A–D, 3D Print version: Table E).
Hardware
Eyes
The EyeRobot eyes are 3D-printed semi-spheres with dimensions similar to an adult eyeball (30 mm diameter, compared to the human 21–27 mm; Bekerman et al., 2014). Each pupil consists of a 6.35-mm black circle with a 2.5-mm aperture in the center that allows a laser diode to be placed at the center of the eye (Supplementary Figure B). The laser diode provides a projection of where the eye is pointed, allowing external verification of the eye tracker's estimated position. The eyes are mounted to an aluminum frame (Fig. 2), which can be pivoted to provide X-axis (left/right) motion. Each eye is attached to the frame on its horizontal axis and has an arm protruding from the back that is moved up and down to provide Y-axis (up/down) motion. The motion is driven by stepper motors, described below.
Motors
Each axis of rotation for each eye is coupled to a stepper motor (28BYJ-48; see Fig. 2). The X-axis motor is mounted below the aluminum frame, while the Y-axis motor is mounted at the top of the frame, with a metal lever connecting it through an intermediate link to the arm protruding from the eye. Each step of the motor corresponds to approximately 1/11th of a degree. The maximum rated step frequency for the motors is 100 Hz; however, we found that modifying the unipolar 28BYJ-48 stepper motors for bipolar operation resulted in higher attainable speeds and eliminated the tendency to miss steps (in our test of 4000 consecutive steps per motor, we detected zero missed steps across all four motors with the bipolar modification). To further improve speed, the motors are powered from a 7.5-volt supply rather than the specified 5 volts.
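For readers reproducing the device, the conversion between commanded angles and motor steps follows directly from the figures above. The sketch below is a minimal example assuming the ~1/11 degree-per-step resolution; the function name and rounding scheme are ours, not taken from the authors' code.

```cpp
#include <cstdio>

// Minimal sketch of the angle-to-step bookkeeping implied above, assuming the
// ~1/11 degree-per-step figure quoted in the text (names are illustrative).
const double STEPS_PER_DEGREE = 11.0;   // ~0.09 deg per step

long degreesToSteps(double degrees) {
  // Round to the nearest whole step; the sign encodes rotation direction.
  return (long)(degrees * STEPS_PER_DEGREE + (degrees >= 0 ? 0.5 : -0.5));
}

int main() {
  // A 12-deg excursion requires ~132 steps; at the rated 100 Hz step frequency
  // that corresponds to only ~9 deg/s of eye rotation, which motivates the
  // bipolar modification and the higher supply voltage for faster movements.
  std::printf("%ld steps\n", degreesToSteps(12.0));
  return 0;
}
```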
Control board
The motors are connected to HiLetgo model A4988 Stepstick stepper driver boards that are controlled by an Arduino Uno R3.
Frame/mount
The eye/frame/motors assemblies are mounted to a wooden frame that provides a rigid platform for all of the EyeRobot components, including the control board. Two adjustable articulated mounts for the eye tracking cameras are attached to the front of the wooden frame to provide a fixed location for the eye tracker (Fig. 1).
Power supply
The EyeRobot motors are powered using a 7.5 V, ≥ 1.5 ampere standard power adapter. The lasers are powered using the Arduino’s 5-volt pin.
Circuitry
The four HiLetgo A4988 driver boards have two inputs: step and direction (dir), which are connected to eight of the Arduino’s GPIO pins (Supplementary Figure A). The Arduino provides independent signals for direction and step initiation for each stepper driver board (i.e., eight signals in total for the four boards), allowing for fully independent motion of each eye.
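The step/direction interface lends itself to a very small firmware core. The following Arduino sketch illustrates one way the eight GPIO signals could drive the four A4988 boards; the pin assignments, function names, and timing values are illustrative assumptions, not the authors' actual wiring or firmware.

```cpp
// Hedged sketch of the step/direction interface described above; pin numbers,
// names, and timing are illustrative assumptions rather than the authors' code.
const int STEP_PINS[4] = {2, 3, 4, 5};   // one STEP input per A4988 board
const int DIR_PINS[4]  = {6, 7, 8, 9};   // one DIR input per A4988 board

// Advance one motor by |steps| steps; the sign of `steps` selects the direction.
void stepMotor(int motor, long steps, unsigned int stepIntervalMicros) {
  digitalWrite(DIR_PINS[motor], steps >= 0 ? HIGH : LOW);
  long n = labs(steps);
  for (long i = 0; i < n; i++) {
    digitalWrite(STEP_PINS[motor], HIGH);
    delayMicroseconds(5);                  // the A4988 needs only a short high pulse
    digitalWrite(STEP_PINS[motor], LOW);
    delayMicroseconds(stepIntervalMicros); // interval between steps sets the eye speed
  }
}

void setup() {
  for (int i = 0; i < 4; i++) {
    pinMode(STEP_PINS[i], OUTPUT);
    pinMode(DIR_PINS[i], OUTPUT);
  }
  // e.g., move one eye's X motor ~10 deg (about 110 steps at ~11 steps/deg)
  // at the rated 100 steps/s.
  stepMotor(0, 110, 10000);
}

void loop() {}
```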
Alternative 3D-print version
We have replicated this manually manufactured design with 3D-printed parts, allowing for simpler and more standardized manufacturing and assembly (Fig. 3). The 3D-printed version was modified to be built with full-size hobby servo motors, which have a standard form factor and can provide additional movement functionality if needed.
Robot Capabilities and Programming
The design of the EyeRobot allows each eye to be moved independently or together with 2 degrees of freedom (X and Y; the majority of available eye trackers do not measure or calibrate torsion). The eye movement range exceeds the human eye-only oculomotor range (~40°, Stahl, 2001); the eyes can be moved beyond the range in which the eye cameras can still see the pupils, which is the limiting factor for testing.
The EyeRobot eye movements are controlled using the Arduino programming language, run on the Arduino board. To translate between stepper motor positions and EyeRobot eye locations, for each experiment we first performed a manual calibration in which the EyeRobot's eyes were positioned at the desired locations while the corresponding motor positions were read and saved. We then converted the desired eye locations over the course of each experiment to their corresponding stepper motor positions. The Arduino also controls the RPM of the stepper motors, allowing for different eye movement speeds.
Eye movements are programmed as a stream of timing, motor position (representing eye location), and motor speed values that are sent from the Arduino to the four stepper motors simultaneously. This format allows the EyeRobot to be programmed to make any desired set of eye movements, with each independently controlled motor moving separately or in tandem. Thus, the eyes can be programmed to have matching or different trajectories and to start in alignment or offset from each other. For more details on how the EyeRobot is programmed, see the Supplementary Information.
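As an illustration of this movement-stream format, a single entry might be represented as below; the struct layout, field names, and example values are assumptions for exposition, not the authors' actual data format.

```cpp
// Illustrative sketch of the movement stream described above (the struct layout,
// field names, and example values are assumptions, not the authors' format).
struct EyeMove {
  unsigned long onsetMs;  // when the movement should begin, relative to trial start
  long target[4];         // absolute step position for each of the four motors,
                          // assumed order: {leftX, leftY, rightX, rightY}
  float rpm;              // stepper speed used for this movement
};

// Example: a conjugate 10-degree rightward movement of both eyes
// (X motors advance ~110 steps at ~11 steps/degree; Y motors hold at 0).
EyeMove rightwardStep = { 1000UL, { 110, 0, 110, 0 }, 2.0 };

void setup() {
  // A motor-control routine (not shown) would step each motor toward its
  // target at the requested speed once onsetMs has elapsed.
}

void loop() {}
```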
Robot accuracy and precision testing
Several accuracy tasks were performed. For each behavior, the motor positions that aligned with the desired locations were determined, and the EyeRobot was then set to move to those locations in a pre-programmed sequence, repeated five times for the incremental steps test and 25 times for the star pattern test. For both tasks, the center location was tested additional times (35 for incremental steps and 124 for star), as that was the starting and ending location for all sequences. Position feedback was provided using the laser diodes in the two eyes and recorded using an external video camera (scene camera, see Eye Movement Recording). A physical grid at 1 m was used to map the video locations to degrees.
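As an illustration of this mapping, a laser-spot location in the scene video can be converted to degrees using the known grid geometry at 1 m. The pixel scale, function name, and example numbers below are illustrative assumptions, not the authors' analysis code.

```cpp
#include <cmath>
#include <cstdio>

// Hedged sketch of mapping laser-spot positions in the scene video to degrees
// using the physical grid at 1 m (pixel scale and names are assumptions).
double pixelsToDegrees(double px, double pxAtCenter, double pxPerCm,
                       double viewingDistanceCm = 100.0) {
  double offsetCm = (px - pxAtCenter) / pxPerCm;  // horizontal offset on the grid, in cm
  return std::atan2(offsetCm, viewingDistanceCm) * 180.0 / 3.14159265358979;
}

int main() {
  // e.g., a laser spot 60 px right of the grid center at 12 px/cm -> ~2.9 deg
  std::printf("%.2f deg\n", pixelsToDegrees(660.0, 600.0, 12.0));
  return 0;
}
```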
Incremental steps
First, we tested each of the EyeRobot eyes moving to locations 2°, 4°, 6°, 8°, 10°, and 12° from center in the horizontal and vertical directions (incremental steps, Fig. 4a). Each eye (and thus motor) was tested independently.
Star pattern
We also tested a large (15°) and a small (3°) star pattern (four cardinal and four oblique directions, Fig. 4b). Each location was tested 25 times. The eyes were programmed to move to each location in tandem to test the simultaneous operation of several of the motors (two for cardinal directions and four for oblique).
Analysis
Video data were saved for analysis. For the star pattern test, target locations were identified using Pupil Player and marked as reference locations. Eye positions were detected manually for each location and marked in the scene by an experimenter. Location coordinates (in pixels) were then saved for comparison. For the small star task, a subset of landings for each location was analyzed: the first 28 at the center location, the first seven at most other locations, and the first six at the two horizontal locations.
Eye tracking
Eye tracking was performed using the Pupil Labs eye tracking cameras (120 Hz) mounted in front of the robot (Fig. 5) such that cameras could be adjusted analogously to the Pupil Core design. The scene camera was adapted from a Dell OptiPlex 7440 all-in-one desktop computer (camera part number 1C4W1 01C4W1 CN-01C4W1) and sampled at 30 Hz. The camera was mounted 83 mm above the pupils, centered between the eyes, with the front of the camera lens 8 mm back from the front of the eyeballs. The camera could be rotated vertically (in pitch), analogously to the Pupil Core design (Fig. 5).
Eye cameras were positioned to capture the entire range of eyeball motion on the calibration task, and Pupil Capture software (Pupil Labs, Berlin, Germany) was used to calibrate and record eye tracking data. In the first experiment, only one eye was tracked. Due to the open, unobscured nature of the robotic eyeballs, the eye cameras' built-in infrared illuminators were dimmed, using masking tape wrapped around the illuminator portion of the camera, for better pupil detection. All recordings were done under sufficient illumination for scene camera images to be clearly visible for later analysis.
Prior to calibration, the eyes were set to move in a large circular pattern (large enough to cover the calibration area), and three to four of these eye rotations were used to inform the eye model optimization routine. The built-in Pupil Core eye model optimization algorithm finds the center and radius of a 3D sphere that represents the eyeball, and estimates the pupil using a circle that rotates tangentially to the sphere such that the 3D projection of the circle on the 2D eye camera image plane is consistent with the ellipse size and orientation calculated by the pupil detection algorithm. The algorithm does this for each instance of pupil detection during the eye rotation motion. The projection of the sphere back onto the eye camera image is then used as feedback on the model fit (Swirski & Dodgson, 2013). Once a good eye model fit was achieved, the model was fixed for the rest of the experiment. Interestingly, the best-fit models reported pupil diameters of at most 5 mm (the reported pupil diameter varies with the skew of the pupil ellipse, with larger diameters reported for near-round ellipses), while the actual pupil size was 6.35 mm. This misestimate may be at least partially due to known errors in pupil size estimation using video-based eye trackers, such as the pupil foreshortening effect (Hayes & Petrov, 2016; Petersch & Dierkes, 2021), where experimental geometry, such as the eye camera distance and gaze angle, may lead to underestimation of the pupil size.
Calibration
A nine-point calibration grid (Fig. 6) was used to calibrate the device. The laser beam from each eyeball was used as feedback on where the optical axis of the eye was oriented. Calibration was performed using the "Physical Marker Calibration" option in Pupil Capture software (Pupil Labs, Berlin, Germany). Markers were printed targets provided by Pupil Labs (v. 0.4, Fig. 6) for easy identification by the Pupil Labs software. The recorded data were processed in Pupil Player, and accuracy and precision were calculated using the calibration set designated as "validation."
In the first set of experiments (monocular), stimuli were shown at 0.5 m from the robot. The calibration marker size was optimized for detection by the Pupil Capture algorithm. In the second set of experiments (binocular), markers were shown at a distance of 1 m. Calibration (and validation) markers were adjusted to be 2.5 cm (1.4° of visual angle) to be identifiable by the scene camera. The calibration field was 16° wide × 14° high, and the validation field was 12.5° × 12.5°.
In each experiment, the EyeRobot was programmed to make eye movements to fixate each of the nine calibration points in the order required by the calibration routine. The accuracy of the eye movements was verified by ensuring that the lasers from the eyes were on the displayed calibration targets.
Calibration with eccentric fixation
Two eccentric fixations were used: a 5° horizontal offset and an 11° horizontal + 12° vertical offset (Fig. 6). These offsets correspond to the laser position for a central target. For example, for the 5° offset, when the target was shown in the center, the EyeRobot was positioned such that the laser was 5° to the right of the target. The eccentric position for each calibration location was determined geometrically and marked with a small dot as a reference to verify that the EyeRobot position was correct. This offset was maintained for all target locations of the nine-point grid (Fig. 6, center and right panels). The experiment was performed binocularly and monocularly; here we report the monocular data. The nine-point grid sequence was repeated at each eccentricity and used as a validation set for calculating accuracy and precision.
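The geometry behind these reference dots can be sketched as follows; the tangent-based layout, the 1-m example distance, and the function name are assumptions for illustration rather than the authors' exact procedure.

```cpp
#include <cmath>
#include <cstdio>

// Hedged sketch of placing the eccentric laser reference dots geometrically
// (the tangent-based layout and names are assumptions, not the authors' code).
double degToCm(double deg, double viewingDistanceCm) {
  return viewingDistanceCm * std::tan(deg * 3.14159265358979 / 180.0);
}

int main() {
  // A 5-deg fixation offset with a calibration target at +8 deg horizontally,
  // viewed at 1 m, places the laser reference dot ~23.1 cm right of center.
  std::printf("%.1f cm\n", degToCm(8.0 + 5.0, 100.0));
  return 0;
}
```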
Saccades and ramps
To test the artificial eyes' ability to be consistently tracked by the eye tracker during standard oculomotor behaviors, we also tested large and small saccades and slower-velocity (12°/s) ramp movements (smooth pursuit). For saccades, the EyeRobot made 24 saccades in eight directions (four cardinal and four oblique), each with two saccade amplitudes: 15° and 3° (same configuration as Fig. 4b). For ramps, the EyeRobot made constant-velocity horizontal movements at 12°/s, spanning −10° to +10° relative to center, for a total displacement of 20°.
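The step timing required for such a ramp follows directly from the step resolution given in the Motors section. The sketch below works through the arithmetic under the ~11 steps-per-degree assumption; it is illustrative only.

```cpp
#include <cstdio>

// Back-of-the-envelope step timing for the 12 deg/s ramps described above,
// assuming ~11 steps per degree as stated in the Methods (illustrative only).
const double STEPS_PER_DEGREE = 11.0;

unsigned long stepIntervalMicros(double degPerSec) {
  double stepsPerSec = degPerSec * STEPS_PER_DEGREE;   // 12 deg/s -> ~132 steps/s
  return (unsigned long)(1.0e6 / stepsPerSec);         // microseconds between steps
}

int main() {
  // ~7575 us between steps reproduces the 12 deg/s (2 RPM) ramp; a full 20-deg
  // sweep (-10 to +10 deg) is then ~220 steps, or roughly 1.7 s.
  std::printf("%lu us\n", stepIntervalMicros(12.0));
  return 0;
}
```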
In each experiment, we used the Pupil Labs eye tracker to record the eye movements of the EyeRobot, and compared the measured movements with the programmed movements.
Data analysis
Data were analyzed using the built-in Pupil Player functionality. Additional analyses were performed using MATLAB (MathWorks, Inc., Natick, MA, USA) and Prism (GraphPad Software, Inc., San Diego, CA, USA). Normality was tested using the Kolmogorov-Smirnov test.
Results
Robot accuracy and precision
Steps
EyeRobot eye landing locations and variability are included in Table 1 and are visualized in Fig. 7. For the step task, the largest EyeRobot error was seen in the left eye, vertical motor, and was 0.22° (4° target). For all motors, the maximum error ranged between 0.16° (right eye, horizontal) and 0.22°. Accuracy of 0.22° is well within the accuracy of video-based eye trackers, including the EyeLink 1000—an industry standard with accuracy of 0.57°—and the Pupil Core, which has an estimated accuracy of 0.82° (Ehinger et al., 2019).
The EyeRobot precision (measured as the standard deviation of the landings for each target location) ranged between 0° and 0.18°, with the largest variability seen for the right eye vertical motor, which overall had the poorest precision. The other motors’ precision was at or below 0.1°. We believe that this difference in performance for different locations is likely due to slight variations in the motors themselves, which are subject to the imperfections of the manufacturing process. The best-case precision of the Pupil Core device has previously been reported at 0.12°, while precision of the EyeLink 1000 was reported at 0.023° in the same study (Ehinger et al., 2019).
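For concreteness, these summary statistics can be expressed in a few lines. The definitions below (precision as the standard deviation of the landings, accuracy as the absolute offset of the mean landing from the target) are inferred from the text, and the data values are hypothetical.

```cpp
#include <vector>
#include <cmath>
#include <cstdio>

// Minimal sketch of the landing-summary statistics used here (assumed definitions
// inferred from the text; not the authors' analysis code).
void summarizeLandings(const std::vector<double>& landingsDeg, double targetDeg,
                       double& accuracyDeg, double& precisionDeg) {
  double mean = 0.0;
  for (double x : landingsDeg) mean += x;
  mean /= landingsDeg.size();

  double ss = 0.0;
  for (double x : landingsDeg) ss += (x - mean) * (x - mean);

  precisionDeg = std::sqrt(ss / (landingsDeg.size() - 1));  // SD of the landings
  accuracyDeg  = std::fabs(mean - targetDeg);               // mean offset from the target
}

int main() {
  std::vector<double> landings = {4.1, 4.2, 4.15, 4.25, 4.2};  // hypothetical 4-deg target
  double acc, prec;
  summarizeLandings(landings, 4.0, acc, prec);
  std::printf("accuracy %.2f deg, precision %.2f deg\n", acc, prec);
  return 0;
}
```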
Star
Similar analysis was performed for the star configuration (Methods, Section 2.3.2). For this motion, we examined the movement of the motors in tandem. Thus, we analyzed the combined gaze from the two eyes (the laser point overlap). The errors are summarized in Tables 2 and 3 and are illustrated in Fig. 8.
Note: for the (0, 0) location, N = 124. For the (0, 15) location, the lasers did not completely overlap on the landings; since the two could not be disambiguated, both measurements were included in the average (N = 48).
Note: for the center (0, 0) location, N = 28; N = 7 for all other locations except the two horizontal locations, where N = 6.
The accuracy and precision of the EyeRobot were comparable on this task to that of the incremental step task.
Eye tracking
As mentioned in the Methods, three calibration experiments were performed in which the offset between the optical axis (laser beam) and the visual axis was increased to emulate normal (no offset) and eccentric fixation (Fig. 6).
Central fixation calibration
In the first experiment we modeled an alignment between the visual and optical axes of the eye. This strategy emulates the assumptions of the Pupil Core eye tracking algorithm. As expected, calibration accuracy (0.5°) and precision (0.1°) were quite high (Fig. 9a).
Eccentric Fixation Calibration
In the second experiment, we increased the offset between the optical and visual axes to 5° horizontally to the right, consistent with offsets measured between the optical and foveal axes in healthy adults (Basmak et al., 2007). As seen in Fig. 9b, after calibration the measured gaze is aligned with the marker, suggesting that a fixed bias in eye rotation should not impact the accuracy and precision of the measurement. The eye tracker calibration procedure was able to compensate for the fixed bias in eye position, and validation reported an accuracy of 0.4° and a precision of 0.1° (Fig. 9b).
We subsequently increased the offset further to 11° right horizontal and 12° up vertical, corresponding to the laser beam location relative to the calibration target (Fig. 9c). This offset was chosen as representative of an MD patient with a large central scotoma (>32.5° in diameter) and fixation eccentricity of ~16°. Calibration accuracy was measured at 0.6°, and precision remained at 0.1°. Overall, there was only a small change in measurement accuracy with increased eccentricity, suggesting that the eye tracking algorithm is able to compensate for a fixed bias across all calibration targets.
Saccades and constant velocity ramps (smooth pursuit)
Figure 10 illustrates the eye tracking data for the star accuracy experiment in Section 3.1.2, Fig. 8 (green triangles). The traces in Fig. 10c highlight the consistency of the EyeRobot’s simulated saccade-like movements.
The stability of the EyeRobot's velocity is further confirmed in Fig. 10d, which shows a series of constant-velocity movements at 12°/s (2 RPM). The change in position is constant and continuous across the repeated movements during the trial. For both movement types, the eye tracker was able to track both eyes throughout the experiment.
Discussion
In this paper we propose a low-cost robotic oculomotor simulator, the EyeRobot, specifically designed to emulate both the eye movement behaviors of healthy individuals and those known to occur in visual and oculomotor disorders. The EyeRobot provides a ground truth reference for validating the eye tracking calibration accuracy of video-based, glint-free eye trackers in cases where eye movements do not match assumptions based on healthy individuals. In this initial deployment, we simulate calibration performed under "healthy" conditions of central fixation, absolute fixational stability, and appropriate eye alignment, and under an "unhealthy" condition of eccentric fixation that commonly occurs in diseases such as age-related macular degeneration.
The robot's design allows for complete and independent programming of the movement of each eye. This design is thus capable of emulating other oculomotor changes beyond eccentric fixation. While prior work has demonstrated the use of a laser-guided artificial eyeball for eye tracker validation with a static eyeball (e.g., Hayes & Petrov, 2016), the EyeRobot provides dynamic functionality that can capture deficits in a wide range of oculomotor behaviors.
EyeRobot performance
We tested the EyeRobot functionality in several ways. First, we tested the accuracy and precision for each axis of rotation and each eye separately. This approach allowed us to understand the individual motor performance of the assembled device. We found the positioning error to be smaller than the resolution limit of most video-based eye trackers. Subsequently, we tested binocular motion along the cardinal and oblique directions. This approach allowed us to examine the simultaneous operation of several (up to all four) motors. Again, we found the EyeRobot accuracy and precision to be quite high, although future testing with different-colored laser diodes in the two eyes, which would allow simpler demarcation of each eye, could be useful for examining variations in accuracy and precision across target locations for each eye.
Eye tracking performance with the EyeRobot
We used the Pupil Core eye tracking software and hardware to track the EyeRobot's eye movements. We found that the eye tracker was able to detect the artificial pupil and estimate an appropriate eye model. We did find that pupil size was underestimated in the model, which could lead to estimation errors in the 3D gaze position. This underestimate did not affect the accuracy and precision estimated by the Pupil Labs software in the scene camera image. In addition to several calibration and validation tasks, we were able to successfully track binocular saccades and constant-velocity (smooth pursuit-like) eye movements made by the robot.
Eye tracking with eccentric fixation
In addition to experiments with central (“healthy”) fixation, we also performed two calibration experiments where the EyeRobot was set to emulate eccentric fixation (where the visual and optical axes do not align). We found that regardless of the fixation eccentricity, the eye tracking algorithm was able to compensate for the offset and similar calibration accuracies were achieved. This finding is consistent with the eye rotation estimation algorithm used by Pupil Core, where the eye rotation vector coordinate frames are rotated relative to each other so that they are aligned in the scene camera space. In our experiment, we introduced a constant offset between the target locations and the orientation of the visual axis. Our findings suggest that the size of this constant offset can be compensated for by the algorithm with the rotation of the eye camera coordinate frames relative to the scene camera. In other words, in the eccentric viewing condition, the eye camera is treated as having an additional rotation that corresponds to the offset. Thus, the coordinate frames can be adjusted to account for visual axis misalignment in MD. As decreases in tracking accuracy in macular degeneration have previously been reported (Love et al., 2021) with increasing fixation eccentricity, this outcome suggests that additional behaviors associated with foveal loss and eccentric fixation, such as fixation instability and use of multiple fixation loci, must be modeled independently and in combination to better understand the sources of eye tracking errors reported previously. Further, the EyeRobot can be used to develop and improve calibration algorithms that take into account larger fixation instabilities that may also be spread out over more elongated or irregular fixation regions (Shanidze et al., 2016).
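The intuition behind this compensation can be illustrated with a schematic example; this is not Pupil Core's actual implementation, only a demonstration that a constant angular offset applied to every gaze sample is removed by a single fixed rotation, regardless of where the targets are.

```cpp
#include <cmath>
#include <cstdio>

// Schematic illustration (not Pupil Core's code) of why a constant offset
// between optical and visual axes can be absorbed by one fixed rotation.
struct Vec3 { double x, y, z; };

// Rotate a direction vector about the vertical (y) axis by `deg` degrees.
Vec3 rotateYaw(const Vec3& v, double deg) {
  double r = deg * 3.14159265358979 / 180.0;
  return { std::cos(r) * v.x + std::sin(r) * v.z,
           v.y,
           -std::sin(r) * v.x + std::cos(r) * v.z };
}

int main() {
  // A measured optical-axis direction 5 deg to the right of straight ahead...
  Vec3 measured = rotateYaw({0.0, 0.0, 1.0}, 5.0);
  // ...is mapped back onto the target direction by a single fixed -5 deg
  // rotation, the same rotation that realigns every other sample as well.
  Vec3 corrected = rotateYaw(measured, -5.0);
  std::printf("corrected: %.3f %.3f %.3f\n", corrected.x, corrected.y, corrected.z);
  return 0;
}
```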
Limitations
The EyeRobot tested here is a custom-built device that requires tools and expertise for the manufacture of precision metal parts, which may pose construction challenges in some settings. To simplify the process, we also developed a 3D-printed design. Using standard, easily printed components, such a design allows the device to be deployed in many locations without the need for machine shop or special tool access.
While the current device uses red laser diodes in both eyes, for future iterations we suggest the use of different color laser diodes in the two eyes for easy disambiguation between each eye’s position.
The device is designed to be low-cost for easy replication. However, as can be seen in Fig. 7 and Table 1, variation across the motors is possible. Higher accuracy and precision, as well as better motion control, may be achieved using robotics-specific servo motors (e.g., Dynamixel Robot Servos). Such a design would increase the cost approximately fourfold, but may be warranted depending on the specific application. We deliberately provide a highly modular hardware design that can easily be modified for other motor form factors and specific experimental needs.
Future directions
Given the flexible nature of the EyeRobot's design, the device can be used to model several oculomotor deficits that may affect eye tracker calibration and accuracy, such as the large deviation between the eyes seen in strabismus or the directional fixational instability in nystagmus. The device can work with any glint-free, video-based eye tracker or post hoc eye tracking algorithm, including as a testing platform for custom-built prototypes. More broadly, video-based eye tracking is subject to a range of environmental, participant- and experimenter-related, and methodological factors that can affect eye tracking accuracy and performance (Holmqvist et al., 2022). The EyeRobot can provide researchers with a means to assess the degree of noise that these different factors may contribute. For example, the device could be used to determine the effect of specific lighting conditions on tracking for the oculomotor behavior and range of interest, or, with appropriate camera mounts, to assess the effects of ambient vibration on camera movement relative to the eye in the context of the specific eye movements being investigated (magnitude, horizontal vs. vertical motion, etc.).
Conclusions
The EyeRobot provides a simple and easily configurable method for obtaining ground truth measurements for eye tracker calibration and validation. While this device can be useful for a number of applications in healthy eye tracking, it is particularly instrumental in cases of oculomotor dysfunction, where care must be taken to disambiguate behavioral changes due to disease from eye tracker performance decrements that arise because the eye tracking algorithm cannot accommodate those changes.
We suggest that the EyeRobot could further be used to develop more robust eye tracking algorithms that are able to accommodate eye tracking of individuals with visual and oculomotor abnormalities.
References
Agaoglu, S., Agaoglu, M. N., & Das, V. E. (2015). Motion Information via the Nonfixating Eye Can Drive Optokinetic Nystagmus in Strabismus. Investigative Ophthalmology & Visual Science, 56(11), 6423–6432. https://doi.org/10.1167/iovs.15-16923
Basmak, H., Sahin, A., Yildirim, N., Papakostas, T. D., & Kanellopoulos, A. J. (2007). Measurement of angle kappa with synoptophore and Orbscan II in a normal population. Journal of Refractive Surgery, 23(5), 456–460.
Bekerman, I., Gottlieb, P., & Vaiman, M. (2014). Variations in Eyeball Diameters of the Healthy Adults. Journal of Ophthalmology, 2014, 503645. https://doi.org/10.1155/2014/503645
Crossland, M. D., Culham, L. E., Kabanarou, S. A., & Rubin, G. S. (2005). Preferred retinal locus development in patients with macular disease. Ophthalmology, 112(9), 1579–1585. https://doi.org/10.1016/j.ophtha.2005.03.027
Economides, J. R., Adams, D. L., & Horton, J. C. (2015). Variability of Ocular Deviation in Strabismus. JAMA Ophthalmology, 134(1), 1–8. https://doi.org/10.1001/jamaophthalmol.2015.4486
Ehinger, B. V., Groß, K., Ibs, I., & König, P. (2019). A new comprehensive eye-tracking test battery concurrently evaluating the Pupil Labs glasses and the EyeLink 1000. PeerJ, 7, e7086. https://doi.org/10.7717/peerj.7086
Hayes, T. R., & Petrov, A. A. (2016). Mapping and correcting the influence of gaze position on pupil size measurements. Behavior Research Methods, 48(2), 510–527. https://doi.org/10.3758/s13428-015-0588-x
Kenyon, R. V., Ciuffreda, K. J., & Stark, L. (1981). Dynamic vergence eye movements in strabismus and amblyopia: asymmetric vergence. The British Journal of Ophthalmology, 65(3), 167–176. https://doi.org/10.1136/bjo.65.3.167
Rosengren, W., Nyström, M., Hammar, B., Rahne, M., Sjödahl, L., & Stridh, M. (2020). Modeling and quality assessment of nystagmus eye movements recorded using an eye-tracker. Behavior Research Methods, 52(4), 1729–1743. https://doi.org/10.3758/s13428-020-01346-y
Shanidze, N., & Velisar, A. (2020). Eye, head, and gaze contributions to smooth pursuit in macular degeneration. Journal of Neurophysiology, 124(1), 134–144. https://doi.org/10.1152/jn.00001.2020
Shanidze, N., Fusco, G., Potapchuk, E., Heinen, S., & Verghese, P. (2016). Smooth pursuit eye movements in patients with macular degeneration. Journal of Vision, 16(3), 1–1. https://doi.org/10.1167/16.3.1
Stahl, J. S. (2001). Eye-head coordination and the variation of eye-movement accuracy with orbital eccentricity. Experimental Brain Research, 136, 200–210. https://doi.org/10.1007/s002210000593
Tarita-Nistor, L., Eizenman, M., Landon-Brace, N., Markowitz, S. N., Steinbach, M. J., & González, E. G. (2015). Identifying absolute preferred retinal locations during binocular viewing. Optometry and Vision Science, 92(8), 863–872. https://doi.org/10.1097/opx.0000000000000641
Verghese, P., Vullings, C., & Shanidze, N. (2021). Eye Movements in Macular Degeneration. Annual Review of Vision Science, 7(1), 1–19. https://doi.org/10.1146/annurev-vision-100119-125555
White, J. M., & Bedell, H. E. (1990). The oculomotor reference in humans with bilateral macular disease. Investigative Ophthalmology & Visual Science, 31(6), 1149–1161
Holmqvist, K., Örbom, S. L., Hooge, I. T. C., Niehorster, D. C., Alexander, R. G., Andersson, R., et al. (2022). Eye tracking: empirical foundations for a minimal reporting guideline. Behavior Research Methods, 1–53. https://doi.org/10.3758/s13428-021-01762-8
Love, K., Velisar, A., & Shanidze, N. (2021). Eye, Robot: Calibration Challenges and Potential Solutions for Wearable Eye Tracking in Individuals with Eccentric Fixation, ETRA '21 Adjunct: ACM Symposium on Eye Tracking Research and Applications, Article No. 16, 1–3. https://doi.org/10.1145/3450341.3458489
Petersch, B., & Dierkes, K. (2021). Gaze-angle dependency of pupil-size measurements in head-mounted eye tracking. Behavior Research Methods, 1–17. https://doi.org/10.3758/s13428-021-01657-8
Swirski, L., & Dodgson, N. (2013). A fully-automatic, temporal approach to single camera, glint-free 3D eye model fitting. Proc PETMEI, 1–11.
Acknowledgements
This work was supported by National Eye Institute Grant R00-EY-026994 (to N. S.), a Sigma Xi Grant-in-Aid of Research (to K.L.) and the Smith-Kettlewell Eye Research Institute.
Open practices statement
The data collection and analysis software, as well as the accompanying data for this study, are available at: https://bitbucket.org/avelisar/irobotsonny2/src/master/. None of the experiments was preregistered.