Introduction

Eye tracking is essential for understanding certain visual and oculomotor disorders, such as macular degeneration (Verghese et al., 2021), strabismus (Agaoglu et al., 2015; Kenyon et al., 1981), and nystagmus (Rosengren et al., 2020). While much progress has been made, these disorders add complexities to eye tracking, such as large misalignment between the visual (fixational locus to object) and pupillary axes, decreased fixational accuracy, increased fixational instability, and potential misalignment of the two eyes. Some of these issues have been studied extensively in traditional, corneal reflection (glint)-based systems (Tarita-Nistor et al., 2015) and with devices that allow ground truth measurements using retinal imaging (Shanidze et al., 2016), but interpretation of eye tracking data in these disorders remains complicated and faces potential confounds. At the same time, these disorders provide a fertile research space: much is yet to be understood about how they impact activities of daily living and mobility, and they simultaneously provide a fascinating model of the neural changes associated with the disorders (e.g., eccentric oculomotor reference, loss of visual field, eye disconjugacy).

The advent of head-based eye tracking devices allows for a wider set of research questions and studies that focus on ecological validity (by allowing gaze tracking that includes eye and head movements) and issues such as mobility. In recent years, several low-cost head-based systems have emerged, making their widespread use in laboratories and clinics possible. This expansion can allow for a more complete understanding of eye and gaze movement deficits that might occur in oculomotor disorders, where head movements may be an especially important consideration (Ehinger et al., 2019; Shanidze & Velisar, 2020). One example of such a device is the Pupil Labs Pupil Core eye tracking platform, which consists of custom, open-source data collection software and a wearable eye tracking headset. The device is low-cost and lightweight, and its open-source software suite and modular hardware can be modified to researchers’ specific needs. The device uses a glint-free, computer vision-based algorithm for eye tracking (Swirski & Dodgson, 2013) that estimates gaze direction by optimizing parameters of an eye model. The algorithm has certain built-in assumptions, such as the alignment of the visual and pupillary axes and eye convergence on the target during calibration. These assumptions, however, are often violated in individuals with oculomotor and visual disorders. In strabismus, for example, the mean eye deviation is 14.2° (Economides et al., 2015), far from the ocular alignment assumed in gaze point estimation.

In the case of macular degeneration (MD), the central retina—including the high-resolution fovea—is often damaged, and one or more peripheral preferred retinal loci (PRLs) may develop (Crossland et al., 2005). With enough time, these can become a new oculomotor reference (White & Bedell, 1990), although a PRL may also be engulfed by the scotoma as the disease progresses, and new ones can develop. Furthermore, PRLs need not be (and rarely are) in corresponding locations on the two retinas. With eccentric PRLs, individuals also tend to have greater fixational instability and less accurate oculomotor behaviors, notably saccades (reviewed in Verghese et al., 2021). These deficits may affect eye tracking calibration and fidelity, and the impact may vary across eye tracker types. For example, we demonstrated both increases in calibration error in MD participants and variations between trackers in a prior report (Love et al., 2021): eye tracking calibration errors in MD participants were somewhat greater than in age-matched controls using the EyeLink 1000, and significantly greater using the Pupil Core eye tracking platform (the latter tested using head-restrained and head-unrestrained calibration approaches for a more direct comparison to the EyeLink). This increase in calibration error may arise because MD-related oculomotor changes violate the assumptions of the gaze estimation algorithm, leading to greater calibration and tracking errors. Indeed, Love et al. (2021) showed that calibration errors increased with larger fixation eccentricities. However, due to the confluence of oculomotor and vision deficit-related factors, it is difficult to pinpoint the exact cause. The misalignment of the visual axis and reduced fixational stability due to eccentric fixation, difficulty in finding or resolving the calibration target, nonlinear eye placement on the calibration targets (e.g., due to multiple fixational loci used throughout calibration), or deficits in saccadic latency and accuracy could all play a role in reducing calibration accuracy. Without a ground-truth method to evaluate these factors independently and in combination, it is difficult to assess whether measured changes (compared to controls) in eye tracking accuracy and effectiveness are the result of the oculomotor changes associated with MD, or of poor tracking by the device itself.

To assess eye tracking accuracy in the presence of disease-related oculomotor changes, we present here a low-cost robotic oculomotor simulator (EyeRobot) that can provide ground truth assessment of eye tracking fidelity. The EyeRobot consists of two independently controlled eyes that can rotate horizontally and vertically, and can be calibrated for eye tracking using standard eye tracker calibration routines. The robot eyes can be statically or dynamically positioned in a relatively precise manner for a range of conjugate and disconjugate behaviors, including saccades and smooth pursuit in three-dimensional space, as well as emulation of behaviors associated with visual/oculomotor deficits (e.g., eccentric fixation, fixational instability, disconjugate eye movements/fixation). The device is designed to be easily manufactured and used for validation purposes with a range of video-based, glint-free eye trackers and in any setting where eye tracking with clinical populations is performed.

As a proof of concept, we use the EyeRobot to calibrate an eye tracker (Pupil Core) by emulating both central and eccentric fixation, and examine the effects of eccentric fixation on eye tracking accuracy. We find that fixation eccentricity can be compensated for by the standard eye tracking algorithms in the Pupil Capture software through a rotation of the coordinate frame. An earlier version of the EyeRobot design was presented in short form in Love et al. (2021).

Methods

We present two versions of the EyeRobot: a custom-manufactured design optimized for rigidity and stability (Figs. 1 and 2), and a 3D-printable design for easier manufacture and assembly (Fig. 3). The results below are reported for the custom-manufactured design.

Fig. 1

Custom-manufactured EyeRobot consists of two metal eye sockets supporting 3D-printed eyeballs attached to two motors each (one for the generation of vertical movement and one for horizontal movement). The sockets are mounted to a wooden frame that also has a scene camera mount above the eye assembly, and two adjustable eye camera mounts facing the eyes

Fig. 2

Schematic of the EyeRobot eye assembly. For clarity, only the left eye is shown; the right eye is a mirror image of the left. a Front view of the EyeRobot eye assembly with parts and dimensions labeled. b Side (medial) view of the eye assembly

Fig. 3

The 3D-printed EyeRobot design. a, b Schematics of the device and device prototype (c). Servo motors are marked in green in the schematics. The eyeballs are attached to the frame using snap-fit pivots

Design

Complete parts lists for both the custom-made prototype and the 3D-printed version are included in the Supplementary Information (Prototype: Tables A–D; 3D Print version: Table E).

Hardware

Eyes

The EyeRobot eyes are 3D-printed semi-spheres, made to have dimensions similar to those of an adult eyeball (30 mm diameter, compared to human: 21–27 mm; Bekerman et al., 2014). Each pupil consists of a 6.35 mm black circle with a 2.5 mm aperture in the center that allows a laser diode to be placed in the center of the eye (Supplementary Figure B). The laser diode provides a projection of where the eye is pointed, allowing external verification of the eye tracker’s estimated position. The eyes are mounted to an aluminum frame (Fig. 2), which can be pivoted to provide X-axis (left/right) motion. Each eye is attached to the frame on its horizontal axis and has an arm protruding from the back that is moved up and down to provide Y-axis (up/down) motion. The motion is driven by stepper motors, described below.

Motors

Each axis of rotation for each eye is coupled to a stepper motor (28BYJ-48; see Fig. 2). The X-axis motor is mounted below the aluminum frame, while the Y-axis motor is mounted at the top of the frame, with a metal lever connecting it through an intermediate link to the arm protruding from the eye. Each step of the motor corresponds to approximately 1/11th of a degree of eye rotation. The maximum rated step frequency for the motors is 100 Hz; however, we found that modifying the unipolar 28BYJ-48 stepper motors for bipolar operation resulted in higher attainable speeds and eliminated the tendency to miss steps (in our test of 4000 consecutive steps per motor, we detected zero missed steps across all four motors with the bipolar modification). To further improve speed, the motors are powered from a 7.5 V supply rather than the specified 5 V.
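For illustration, the degrees-to-steps conversion implied by this gearing can be written as a small Arduino-style helper (a minimal sketch; the constant and function name are ours, not part of the published firmware):

```cpp
// Approximate gearing of the 28BYJ-48 drive: ~11 steps per degree of
// eye rotation (i.e., ~1/11 deg per step, as described above).
const float STEPS_PER_DEGREE = 11.0f;

// Convert a desired eye rotation (degrees) to the nearest whole step
// count; the rounding residual is < 1/22 deg, well below the device's
// measured 0.16-0.22 deg position error.
long degreesToSteps(float degrees) {
  return lroundf(degrees * STEPS_PER_DEGREE);
}
```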

Control board

The motors are connected to HiLetgo model A4988 Stepstick stepper driver boards that are controlled by an Arduino Uno R3.

Frame/mount

The eye/frame/motors assemblies are mounted to a wooden frame that provides a rigid platform for all of the EyeRobot components, including the control board. Two adjustable articulated mounts for the eye tracking cameras are attached to the front of the wooden frame to provide a fixed location for the eye tracker (Fig. 1).

Power supply

The EyeRobot motors are powered using a 7.5 V, ≥ 1.5 A standard power adapter. The lasers are powered from the Arduino’s 5 V pin.

Circuitry

Each of the four HiLetgo A4988 driver boards has two control inputs, step and direction (dir), which are connected to eight of the Arduino’s GPIO pins in total (Supplementary Figure A). The Arduino provides independent direction and step signals for each stepper driver board (i.e., eight signals for the four boards), allowing fully independent motion of each eye.
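As a concrete illustration, a minimal Arduino sketch of this step/dir scheme for a single axis might look as follows (pin numbers and timing are assumptions for illustration; the actual wiring is given in Supplementary Figure A):

```cpp
// Illustrative step/dir control of one A4988 driver (one EyeRobot axis).
// Pin numbers are assumptions for this sketch, not the actual wiring.
const int DIR_PIN = 2;
const int STEP_PIN = 3;

void setup() {
  pinMode(DIR_PIN, OUTPUT);
  pinMode(STEP_PIN, OUTPUT);
}

// Advance the axis by `steps` motor steps (~1/11 deg each) at a rate
// set by `usPerStep` (microseconds per step).
void stepAxis(long steps, bool forward, unsigned long usPerStep) {
  digitalWrite(DIR_PIN, forward ? HIGH : LOW);
  for (long i = 0; i < steps; i++) {
    digitalWrite(STEP_PIN, HIGH);   // rising edge advances one step
    delayMicroseconds(usPerStep / 2);
    digitalWrite(STEP_PIN, LOW);
    delayMicroseconds(usPerStep / 2);
  }
}

void loop() {
  // ~2 deg back and forth at 200 steps/s (above the unmodified 100 Hz
  // rating; feasible after the bipolar modification described above).
  stepAxis(22, true, 5000);
  delay(1000);
  stepAxis(22, false, 5000);
  delay(1000);
}
```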

Alternative 3D-print version

We have replicated this manually manufactured design with 3D-printed parts, allowing for simpler and more standardized manufacturing and assembly (Fig. 3). The 3D-printed version was modified to use full-size hobby servo motors, which have a standard form factor and can provide additional movement functionality if needed.

Robot capabilities and programming

The design of the EyeRobot allows each eye to be moved independently or together along two degrees of freedom (X and Y; the majority of available eye trackers do not measure or calibrate torsion). The eye movement range exceeds the human eye-only oculomotor range (~40°; Stahl, 2001) and extends beyond the range over which the eye cameras can still see the pupils; camera coverage is therefore the limiting factor for testing.

The EyeRobot eye movements are controlled using the Arduino programming language, run on the Arduino board. To translate between stepper motor positions and EyeRobot eye locations, for each experiment we first performed a manual calibration, positioning the EyeRobot’s eyes at the desired locations while reading and saving the corresponding motor positions. We then converted the desired eye locations over the course of each experiment to their corresponding stepper motor positions. The Arduino also controls the RPM of the stepper motors, allowing for different eye movement speeds.

Eye movements are programmed as a stream of timing, motor position (representing eye location), and motor speed commands that are sent from the Arduino to the four stepper motors simultaneously. This format allows the EyeRobot to be programmed to make any desired set of eye movements, with each independently controlled motor moving separately or in tandem. Thus, the eyes can be programmed to follow matching or different trajectories, and to start in alignment or offset from each other. For more details on how the EyeRobot is programmed, see the Supplementary Information.
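A hedged sketch of what such a command stream could look like (the struct and field names are ours for illustration; the actual format is documented in the Supplementary Information):

```cpp
// One movement command: when to move, where each motor should go,
// and how fast. Field names are illustrative, not the authors' code.
struct EyeCommand {
  unsigned long timeMs;  // execution time relative to sequence start
  long steps[4];         // targets: L-horiz, L-vert, R-horiz, R-vert
  float rpm;             // stepper speed for this movement
};

// Example: a conjugate ~15 deg rightward movement of both eyes
// (~165 steps at ~11 steps/deg), then a return to center.
// 2 RPM corresponds to ~12 deg/s, as in the ramp experiments below.
EyeCommand sequence[] = {
  {    0, { 165, 0, 165, 0 }, 2.0f },
  { 1000, {   0, 0,   0, 0 }, 2.0f },
};
```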

Robot accuracy and precision testing

Several accuracy tasks were performed. For each behavior, the motor positions that aligned with the desired locations were determined, and the EyeRobot was then set to move to those locations in a pre-programmed sequence, repeated five times for the incremental steps test and 25 times for the star pattern test. For both tasks, the center location was tested additional times (35 for incremental steps and 124 for star), as that was the starting and ending location for all sequences. Position feedback was provided using the laser diodes in the two eyes and recorded using an external video camera (scene camera, see Eye Movement Recording). A physical grid at 1 m was used to map the video locations to degrees.
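For reference, a displacement d on a grid at 1 m converts to rotation angle by simple trigonometry (a worked example of ours, not taken from the paper):

$$\theta = \arctan\!\left(\frac{d}{100\ \text{cm}}\right), \qquad \text{e.g., } d = 1.75\ \text{cm} \;\Rightarrow\; \theta \approx 1.0^\circ.$$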

Incremental steps

First, we tested each of the EyeRobot eyes moving to locations 2°, 4°, 6°, 8°, 10°, and 12° from center in the horizontal and vertical directions (incremental steps, Fig. 4a). Each eye (and thus motor) was tested independently.

Fig. 4

EyeRobot accuracy and precision testing tasks. a Incremental step task: the EyeRobot eye was moved from the origin (0,0) in incrementally increasing steps of 2° in the horizontal (purple) or vertical (brown) direction. b Star task: the EyeRobot eyes were moved to one of eight locations at 15° (large star, blue dots) or 3° (small star, yellow dots). Example trajectories are shown with dashed arrows. All movements were initiated from the (0,0) position

Star pattern

We also tested a large (15°) and a small (3°) star pattern (four cardinal and four oblique directions; Fig. 4b). Each location was tested 25 times. The eyes were programmed to move to each location in tandem to test the simultaneous operation of several motors (two for cardinal directions and four for oblique).

Analysis

Video data were saved for analysis. For the star pattern test, target locations were identified using Pupil Player and marked as reference locations. Eye positions were detected manually for each location and marked in the scene by an experimenter. Location coordinates (in pixels) were then saved for comparison. For the small star task, a subset of landings for each location was analyzed: the first 28 at the center location and the first seven at each remaining location, except the two horizontal locations, where six were analyzed (N = 6).

Eye tracking

Eye tracking was performed using Pupil Labs eye tracking cameras (120 Hz) mounted in front of the robot (Fig. 5), positioned so that the cameras could be adjusted analogously to the Pupil Core design. The scene camera was adapted from a Dell OptiPlex 7440 all-in-one desktop computer (camera part number 1C4W1 01C4W1 CN-01C4W1) and sampled at 30 Hz. The camera was mounted 83 mm above the pupils, centered between the eyes, with the front of the camera lens 8 mm behind the front of the eyeballs, and could be rotated vertically (in pitch), as in the Pupil Core design (Fig. 5).

Fig. 5

Eye and scene camera placement on the EyeRobot. Scene camera is mounted above the eyes on a metal beam. The eye cameras are mounted on articulated arms in front of and below the eyes

Eye cameras were positioned to capture the entire range of eyeball motion on the calibration task, and Pupil Capture software (Pupil Labs, Berlin, Germany) was used to calibrate and record eye tracking data. In the first experiment, only one eye was tracked. Due to the open, unobscured nature of the robotic eyeballs, the eye cameras’ built-in infrared illuminators were dimmed, using masking tape wrapped around the illuminator portion of the camera, for better pupil detection. All recordings were done under sufficient illumination for scene camera images to be clearly visible for later analysis.

Prior to calibration, the eyes were set to move in a large circular pattern (large enough to cover the calibration area), and three to four of these eye rotations were used to inform the eye model optimization routine. The built-in Pupil Core eye model optimization algorithm finds the center and radius of a 3D sphere that represents the eyeball, and estimates the pupil using a circle that rotates tangentially to the sphere such that the 3D projection of the circle on the 2D eye camera image plane is consistent with the ellipse size and orientation calculated by the pupil detection algorithm. The algorithm does this for each instance of pupil detection during the eye rotation motion. The projection of the sphere back onto the eye camera image is then used as feedback on the model fit (Swirski & Dodgson, 2013). Once a good eye model fit was achieved, the model was fixed for the rest of the experiment. Interestingly, the best-fit models reported pupil diameters of at most 5 mm (the reported diameter varies with the skew of the pupil ellipse, with larger diameters reported for near-round ellipses), while the actual pupil size was 6.35 mm. This misestimate may be at least partially due to known errors in pupil size estimation with video-based eye trackers, such as the pupil foreshortening effect (Hayes & Petrov, 2016; Petersch & Dierkes, 2021), where experimental geometry such as eye camera distance and gaze angle can lead to underestimation of pupil size.

Calibration

A nine-point calibration grid (Fig. 6) was used to calibrate the device. The laser beam from each eyeball was used as feedback on where the optical axis of the eye was oriented. Calibration was performed using the “Physical Marker Calibration” option in Pupil Capture (Pupil Labs, Berlin, Germany). Markers were printed targets provided by Pupil Labs (v. 0.4, Fig. 6) for easy identification by the Pupil Labs software. The recorded data were processed in Pupil Player, and accuracy and precision were calculated using the calibration set designated as “validation.”

Fig. 6

Calibration marker and grid used for eye tracker calibration and validation. Bullseye targets used during the experiment are shown at target locations. Red dots represent gaze locations for the corresponding target for central (left), 5° eccentric (center), and 11° × 12° eccentric (right) calibrations. Example target-gaze correspondences are shown with blue arrows

In the first set of experiments (monocular), stimuli were shown at 0.5 m from the robot, and the calibration marker size was optimized for detection by the Pupil Capture algorithm. In the second set of experiments (binocular), markers were shown at a distance of 1 m. Calibration (and validation) markers were sized at 2.5 cm (1.4° visual angle) to be identifiable by the scene camera. The calibration field was 16° W × 14° H and the validation field was 12.5° × 12.5°.
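As a check on the stated marker size (our arithmetic): a 2.5 cm marker at 1 m subtends

$$2\arctan\!\left(\frac{1.25\ \text{cm}}{100\ \text{cm}}\right) \approx 1.43^\circ,$$

consistent with the quoted 1.4° of visual angle.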

In each experiment, the EyeRobot was programmed to make eye movements to fixate each of the nine calibration points in the order required by the calibration routine. The accuracy of the eye movements was verified by ensuring that the laser from each eye landed on the displayed calibration target.

Calibration with eccentric fixation

Two eccentric fixations were used: a 5° horizontal offset and an 11° horizontal + 12° vertical offset (Fig. 6). These offsets describe the laser position relative to the target. For example, for the 5° offset, when the target was shown in the center, the EyeRobot was positioned such that the laser pointed 5° to the right of the target. The eccentric position for each calibration location was determined geometrically and marked with a small dot as a reference to verify that the EyeRobot position was correct. This offset was maintained for all target locations of the nine-point grid (Fig. 6, center and right panels). The experiment was performed binocularly and monocularly; here we report the monocular data. The nine-point grid sequence was repeated at each eccentricity and used as a validation set for calculating accuracy and precision.
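Because the offset is constant across the grid, computing the eye orientation for each calibration point reduces to adding a fixed angular offset to the target direction; a minimal sketch (the struct and function names are ours for illustration):

```cpp
// Eye orientation (deg) for an eccentric-fixation calibration point:
// the optical axis (laser) is held at a fixed offset from the target.
struct AngleDeg { float x; float y; };

AngleDeg eccentricEyePosition(AngleDeg target, AngleDeg offset) {
  // offset = {5, 0} gives the 5 deg rightward condition;
  // offset = {11, 12} gives the large-offset condition.
  return { target.x + offset.x, target.y + offset.y };
}
```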

Saccades and ramps

To test the artificial eye’s ability to be consistently tracked by the eye tracker during standard oculomotor behaviors, we also tested large and small saccades, and slower-velocity (12°/s) ramp movements (smooth pursuit). For saccades, the EyeRobot made 24 saccades in eight directions (four cardinal and four oblique) at each of two saccade amplitudes: 15° and 3° (same configuration as Fig. 4b). For ramps, the EyeRobot made constant velocity horizontal movements at 12°/s (2 RPM). The movements spanned −10° to +10° of center, for a total displacement of 20°.

In each experiment, we used the Pupil Labs eye tracker to record the eye movements of the EyeRobot, and compared the measured movements with the programmed movements.

Data analysis

Data were analyzed using the built-in Pupil Player functionality. Additional analyses were performed using MATLAB (MathWorks, Inc., Natick, MA, USA) and Prism (GraphPad Software, Inc., San Diego, CA, USA). Normality was tested using the Kolmogorov-Smirnov test.

Results

Robot accuracy and precision

Steps

EyeRobot eye landing locations and variability are included in Table 1 and visualized in Fig. 7. For the step task, the largest EyeRobot error, 0.22° (4° target), was seen for the left eye’s vertical motor. Across motors, the maximum error ranged from 0.16° (right eye, horizontal) to 0.22°. An error of 0.22° is well below the reported accuracy of video-based eye trackers, including the EyeLink 1000, an industry standard with an accuracy of 0.57°, and the Pupil Core, with an estimated accuracy of 0.82° (Ehinger et al., 2019).

Table 1 Position and variability of the left and right eye for each target location. Note, for 0° target location, N = 35. For all other locations, N = 5.
Fig. 7

EyeRobot accuracy and precision for horizontal and vertical eye rotations of 2°, 4°, 6°, 8°, 10°, and 12°. Middle panel shows eye landing locations for each target location, for each motor (horizontal and vertical, designated by color and shape in legend). Points in the middle panel have been offset by 0.2° in x and y for visibility. Side panels show mean eye position error at each location marked in the middle panel for each rotation direction/motor (filled bars: horizontal, open bars: vertical; error bars are SEM); left panel: left eye, right panel: right eye

The EyeRobot precision (measured as the standard deviation of the landings for each target location) ranged between 0° and 0.18°, with the largest variability seen for the right eye’s vertical motor, which had the poorest precision overall. The other motors’ precision was at or below 0.1°. We believe that this difference in performance across locations is likely due to slight variations in the motors themselves, which are subject to manufacturing imperfections. The best-case precision of the Pupil Core device has previously been reported as 0.12°, while the precision of the EyeLink 1000 was reported as 0.023° in the same study (Ehinger et al., 2019).
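For clarity, one plausible reading of the accuracy and precision measures used here, written out as code (our formulation; the paper does not publish its analysis scripts):

```cpp
#include <cmath>
#include <numeric>
#include <vector>

// Mean of a set of landing positions (deg) for one target location.
double meanOf(const std::vector<double>& v) {
  return std::accumulate(v.begin(), v.end(), 0.0) / v.size();
}

// Accuracy: absolute offset of the mean landing from the target.
double accuracyDeg(const std::vector<double>& landings, double targetDeg) {
  return std::fabs(meanOf(landings) - targetDeg);
}

// Precision: sample standard deviation of the landings.
double precisionDeg(const std::vector<double>& landings) {
  const double m = meanOf(landings);
  double ss = 0.0;
  for (double x : landings) ss += (x - m) * (x - m);
  return std::sqrt(ss / (landings.size() - 1));
}
```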

Star

A similar analysis was performed for the star configuration (Methods, Section 2.3.2). For this motion, we examined the movement of the motors in tandem, and thus analyzed the combined gaze of the two eyes (the overlap of the laser points). The errors are summarized in Tables 2 and 3 and illustrated in Fig. 8.

Table 2 Horizontal and vertical position error for all target locations of the 15° star. Note: for the (0,0) location, N = 124. For the (0,15) location, the lasers did not completely overlap on the landings; since the two could not be disambiguated, both measurements were included in the average (N = 48)
Table 3 Horizontal and vertical position error for all target locations of the 3° star. Note: for the center (0,0) location, N = 28; N = 7 for all other locations except the two horizontal locations, where N = 6
Fig. 8

EyeRobot performance on the star task. a EyeRobot combined eye landing positions in the scene camera for the 15° (red) and 3° (green) star tasks. Each green and red symbol represents a single landing of the EyeRobot gaze on the reference location (corresponding grey dot). b EyeRobot gaze position errors in the horizontal and vertical directions across target locations for the 15° task (error bars: SEM). The red dots on the x-axis represent the corresponding target locations where the gaze position error was calculated


The accuracy and precision of the EyeRobot on this task were comparable to those on the incremental step task.

Eye tracking

As mentioned in the Methods, three calibration experiments were performed, in which the offset between the optical axis (laser beam) and the visual axis was increased to emulate normal (no offset) and eccentric fixation (Fig. 6).

Central fixation calibration

In the first experiment, we modeled alignment between the visual and optical axes of the eye. This strategy matches the assumptions of the Pupil Core eye tracking algorithm. As expected, calibration accuracy (0.5°) and precision (0.1°) were high (Fig. 9a).

Fig. 9

Monocular eye tracker calibration with the EyeRobot eye at different ocular eccentricities. The accuracy and precision values listed are those reported by the Pupil Labs software after validation. Top row: Eyeball positions for each calibration grid target fixation (laser position with or without offset). Green circle: the eye model fit; red dot: detected pupil center; red circle: pupil ellipse used to determine gaze direction (Swirski & Dodgson, 2013). Bottom row: Composite images of estimated gaze locations in the scene camera image. Yellow circle with turquoise center dot: the point of regard measured by the eye tracker. Laser light is marked on the calibration grid as a red dot (raw versions of the calibration grid images are available in the Supplementary Information, Figure C, for reference). a No-offset calibration: the optical axis of the eyeball is aligned with the target. b 5° rightward offset of the optical axis (red laser dot). c Large offset (11° right horizontal and 12° up vertical). The laser is aligned with the cross at the top of the reversed L-shaped paper target, with the marker placed on the bottom left edge illustrating the offset. The visible variability in the orientation of the paper target at each position likely added slight variability to the bias and may thus have led to a less accurate measurement

Eccentric fixation calibration

In the second experiment, we increased the offset between the optical and visual axes to 5° horizontally to the right, consistent with offsets measured between the optical and foveal axes in healthy adults (Basmak et al., 2007). As seen in Fig. 9b, after calibration the measured gaze is aligned with the marker, suggesting that a fixed bias in eye rotation should not impact the accuracy and precision of the measurement. The eye tracker calibration procedure was able to compensate for the fixed bias in eye position, with a validation-reported accuracy of 0.4° and precision of 0.1° (Fig. 9b).

We subsequently increased the offset further, to 11° right horizontal and 12° up vertical, corresponding to the laser beam location relative to the calibration target (Fig. 9c). This offset was chosen as representative of an MD patient with a large central scotoma (>32.5° in diameter) and a fixation eccentricity of ~16°. Calibration accuracy was measured at 0.6°, and precision remained at 0.1°. Overall, there was only a small change in measurement accuracy with increased eccentricity, suggesting that the eye tracking algorithm is able to compensate for a bias that is fixed across all calibration targets.
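The stated eccentricity follows from the two offset components (our arithmetic):

$$\sqrt{11^2 + 12^2} \approx 16.3^\circ,$$

which also matches the quoted scotoma diameter of roughly twice the fixation eccentricity (2 × 16.3° ≈ 32.6°).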

Saccades and constant velocity ramps (smooth pursuit)

Figure 10 illustrates the eye tracking data for the star accuracy experiment in Section 3.1.2, Fig. 8 (green triangles). The traces in Fig. 10c highlight the consistency of the EyeRobot’s simulated saccade-like movements.

Fig. 10

Raw eye tracking model (a, b) and data for 3° saccades in eight different directions (c) and a horizontal velocity ramp (d). a Model of the tracked eyeballs in the corresponding eye camera’s coordinates (red, blue, and green axes below the corresponding eyes), with the gaze direction vector (pupil normal) marked (left eye in blue, right eye in red). b Projection of the pupil normal unit vector (orange) onto the scene camera coordinate frame (x: blue, y: green, z: red). The tracker software estimates the direction of the pupil normal unit vector (gaze direction), which is then separated into the x, y, and z components shown in c and d, where the graphs’ y-axes are unitless, showing the frame-by-frame projection of the pupil normal unit vector. c Pupil normal vector projection time series for the right and left eye of the EyeRobot in the normalized eye camera coordinates. Saccade trajectories are marked in the legend at the top, and their order is marked above the traces. d Vector time series for the right and left eye of the EyeRobot performing constant velocity ramps (smooth pursuit). Note the similarity of each cycle across the whole time series. Rows: x, y, and z axes. Note: motion is evident on all three axes due to the orientation of the eye cameras relative to the eye in 3D space

The stability of the EyeRobot’s velocity is further confirmed in Fig. 10d, which shows a series of constant velocity movements at 12°/s (2 RPM). The change in position is constant and continuous across the repeated movements during the trial. For both movement types, the eye tracker was able to track both eyes throughout the experiment.

Discussion

In this paper we present a low-cost robotic oculomotor simulator, the EyeRobot, specifically designed to emulate both the eye movement behaviors of healthy individuals and those known to occur in visual and oculomotor disorders. The EyeRobot provides a ground truth reference for validating the eye tracking calibration accuracy of video-based, glint-free eye trackers in cases where eye movements do not match assumptions based on healthy individuals. In this initial deployment, we simulate calibration performed under “healthy” conditions of central fixation, absolute fixational stability, and appropriate eye alignment, and under the “unhealthy” condition of eccentric fixation, which commonly occurs in diseases such as age-related macular degeneration.

The robot’s design allows for complete and independent programming of the movement of each eye, and it is thus capable of emulating oculomotor changes beyond eccentric fixation. While prior work has demonstrated the use of a laser-guided artificial eyeball for eye tracker validation with a static eyeball (e.g., Hayes & Petrov, 2016), the EyeRobot provides dynamic functionality that can capture deficits in a wide range of oculomotor behaviors.

EyeRobot performance

We tested the EyeRobot functionality in several ways. First, we tested the accuracy and precision of each axis of rotation and each eye separately. This approach allowed us to understand the individual motor performance of the assembled device. We found the accuracy error to be smaller than the resolution limit of most video-based eye trackers. Subsequently, we tested binocular motion along the cardinal and oblique directions, which allowed us to examine the simultaneous operation of several (up to all four) motors. Again, we found the EyeRobot accuracy and precision to be quite high, though future testing with different-colored laser diodes in the two eyes, allowing simpler demarcation of each eye, could be useful for examining variations in accuracy and precision across target locations for each eye individually.

Eye tracking performance with the EyeRobot

We used the Pupil Core eye tracking software and hardware to track the EyeRobot’s eye movements. We found that the eye tracker was able to detect the artificial pupil and estimate an appropriate model of the eye. We did find that pupil size was underestimated in the model, which could lead to estimation errors in the 3D gaze position; however, this underestimate did not affect the accuracy and precision estimated by the Pupil Labs software in the scene camera image. In addition to several calibration and validation tasks, we were able to successfully track binocular saccades and constant velocity (smooth pursuit) eye movements made by the robot.

Eye tracking with eccentric fixation

In addition to the experiments with central (“healthy”) fixation, we performed two calibration experiments in which the EyeRobot was set to emulate eccentric fixation (where the visual and optical axes do not align). We found that regardless of the fixation eccentricity, the eye tracking algorithm was able to compensate for the offset, and similar calibration accuracies were achieved. This finding is consistent with the eye rotation estimation algorithm used by Pupil Core, in which the eye rotation vector coordinate frames are rotated relative to each other so that they are aligned in the scene camera space. In our experiment, we introduced a constant offset between the target locations and the orientation of the visual axis. Our findings suggest that a constant offset, regardless of its size, can be compensated for by the algorithm through rotation of the eye camera coordinate frames relative to the scene camera. In other words, in the eccentric viewing condition, the eye camera is treated as having an additional rotation that corresponds to the offset. Thus, the coordinate frames can be adjusted to account for visual axis misalignment in MD. As decreases in tracking accuracy with increasing fixation eccentricity have previously been reported in macular degeneration (Love et al., 2021), this outcome suggests that additional behaviors associated with foveal loss and eccentric fixation, such as fixation instability and the use of multiple fixation loci, must be modeled independently and in combination to better understand the sources of the eye tracking errors reported previously. Further, the EyeRobot can be used to develop and improve calibration algorithms that take into account larger fixation instabilities that may also be spread out over more elongated or irregular fixation regions (Shanidze et al., 2016).
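The intuition can be written compactly (our notation, abstracting the actual Pupil Core implementation): if the measured eye direction $e$ relates to the intended gaze direction $g$ by a fixed rotation, $e = R_{\text{off}}\, g$, then a calibration that fits a rotation $R_{\text{cal}}$ minimizing the error between $R_{\text{cal}}\, e$ and the target directions simply recovers

$$R_{\text{cal}} = R_{\text{off}}^{-1} \quad\Rightarrow\quad \hat{g} = R_{\text{cal}}\, e = R_{\text{off}}^{-1} R_{\text{off}}\, g = g,$$

so a constant offset of any size cancels exactly, while time-varying deviations (fixational instability, multiple loci) would not.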

Limitations

The EyeRobot tested here is a custom-built device that requires tools and expertise for the manufacture of precision-built metal parts, which may pose construction challenges in some settings. To simplify the process, we also developed a 3D-printed design. Using standard, easily printed components, this design allows the device to be deployed in many locations without the need for machine shop or special tool access.

While the current device uses red laser diodes in both eyes, for future iterations we suggest the use of different color laser diodes in the two eyes for easy disambiguation between each eye’s position.

The device is designed to be low-cost for easy replication. However, as can be seen in Fig. 7 and Table 1, variation across the motors is possible. Higher accuracy and precision, as well as better motion control, may be achieved using robotics-specific servo motors (e.g., Dynamixel robot servos). Such a design would increase the cost approximately fourfold, but may be warranted depending on the specific application. We deliberately provide a highly modular hardware design that can easily be modified for other motor form factors and specific experimental needs.

Future directions

Given the flexible nature of the EyeRobot’s design, the device can be used to model several oculomotor deficits that may affect eye tracker calibration and accuracy, such as the large deviation between the eyes seen in strabismus or the directional fixational instability of nystagmus. The device can work with any glint-free, video-based eye tracker or post hoc eye tracking algorithm, including as a testing platform for custom-built prototypes. More broadly, video-based eye tracking is subject to a range of environmental, participant- and experimenter-related, and methodological factors that can affect eye tracking accuracy and performance (Holmqvist et al., 2022). The EyeRobot can provide researchers with a means to assess the degree of noise that these different factors may contribute. For example, the device could be used to determine the effect of specific lighting conditions on tracking for the oculomotor behavior and range of interest, or, with appropriate camera mounts, to assess the effects of ambient vibration on camera movement relative to the eye in the context of the specific eye movements being investigated (magnitude, horizontal vs. vertical motion, etc.).

Conclusions

The EyeRobot provides a simple and easily configurable method for obtaining ground truth measurements for eye tracker calibration and validation. While the device can be useful for a number of applications in tracking healthy eyes, it is particularly instrumental in cases of oculomotor dysfunction, where care must be taken to disambiguate behavioral changes due to disease from eye tracker performance decrements caused by the eye tracking algorithm’s inability to accommodate those behavioral changes.

We suggest that the EyeRobot could further be used to develop more robust eye tracking algorithms that are able to accommodate eye tracking of individuals with visual and oculomotor abnormalities.