Introduction

Humans effortlessly reach for and manipulate objects in the environment, a skill that strongly depends on vision of the objects in question and, to a lesser degree, vision of the hand. The eye–hand coordination required in such tasks has been investigated from several disciplinary perspectives including neurophysiology, biomechanics and experimental psychology, reflecting the multifaceted nature of its underpinnings. The central question in research on eye–hand coordination is how visual information about the target is transformed into motor commands for the arm and hand (e.g., Baraduc et al. 1999; Bullock and Grossberg 1988; Buneo and Andersen 2006; Crawford et al. 2004; Flanders et al. 1992; Sober and Sabes 2003, 2005). A key issue in this context is how perceptual and motor variables are coded in the brain at different stages of the visuomotor transformation.

Studies on eye–hand coordination have provided evidence for a prominent role of gaze direction at different stages of the visuomotor transformation. For example, the motor performance has been found to depend on gaze direction in a variety of tasks (e.g., Mrotek and Soechting 2007; Neggers and Bekkering 2000; Roerdink et al. 2008; Soechting et al. 2001). Furthermore, a large body of research suggests that the visual memory of stationary spatial targets is represented and updated in a gaze-centered reference frame (Henriques et al. 1998; Khan et al. 2007; McIntyre et al. 1997, 1998; van den Dobbelsteen et al. 2004; van Pelt and Medendorp 2007; Vetter et al. 1999). Such gaze effects have also been reported for auditory spatial targets (Pouget et al. 2002) and perception of heading direction using (neck) proprioception (Ivanenko et al. 2000). These behavioral observations are complemented by neurophysiological reports of neural activity in brain areas involved in visuomotor transformations (Andersen et al. 1985; Batista et al. 1999; Buneo et al. 2002; Lacquaniti et al. 1995; Medendorp et al. 2003, 2005; Pesaran et al. 2006).

Many real-life tasks involve non-stationary objects (e.g., avoiding an approaching car or catching a baseball). Such tasks are of interest for motor control theory because they are more stringently constrained by the environment than simple point-to-point reaching movements (Beek et al. 2003). We previously studied a right-handed catching task involving three-dimensional curvilinear ball trajectories, and suggested, among other things, that gaze direction played a prominent role in these movements. In our task, the hand could be moved along a lateral bar, which the balls passed at a position that could not be predicted by the participants before the start of the trial. The hand movements showed a distinct leftward bias (Fig. 1); for instance, when the hand started at the ball’s future passing position (unbeknownst to the participant), it was first moved leftward (i.e., away from the starting position), after which its movement direction was reversed to catch the ball at the starting position (see also Jacobs and Michaels 2006). These results suggested that the movement vector for interception might be coded in gaze-centered coordinates (as explained below), while it apparently is not based on an accurate prediction of the ball trajectory but is updated during execution (see also Montagne et al. 1999; Peper et al. 1994).

Fig. 1

Schematic illustration of the leftward bias in hand movements observed for lateral catching (Dessing et al. 2005). The upper part of the graph (dotted lines) shows the lateral ball position as a function of time [time to contact (TTC), running downward] for balls moving on an outward and inward approach trajectory. The horizontal line represents the moment of interception. The lower part shows the corresponding lateral hand movements for three starting positions (TTC, running upward). The black thin and thick lines represent hand movements for inward and outward ball trajectories, respectively. For reference, the grey line shows a minimal-jerk movement (Flash and Hogan 1985) from the initial hand position to the interception point with the same movement time as the catching movements. All hand trajectories lie to the left of the reference trajectory, reflecting the leftward bias in the hand movements. This bias is towards the gaze direction (i.e., the eye is positioned at X = 0), assuming that people continuously gaze at the ball during normal catching

Capitalizing on the vector-integration-to-endpoint model (Bullock and Grossberg 1988), we formulated a dynamical neural network model to account for our experimental observations. This model includes a continuous visuomotor transformation in that movement commands are generated online, for which it uses separate ball position and velocity signals (Dessing et al. 2002, 2005). Interestingly, the motor commands generated by the model did not rely on explicit predictions of the ball’s passing position (i.e., the interception point, IP), in contrast with typical implementations of robot interception (e.g., Riley and Atkeson 2002). The model accounts for the leftward movement biases through two effects: biases arising within the representation of target motion and modulations arising within the transformation of movement commands into body-centered coordinates (Fig. 1; see next paragraphs). The present study was designed to elucidate the gaze direction-dependency of the latter.

Figure 2a depicts a schematic of the origin of movement biases as implemented in the model. Firstly, the ball’s visual acceleration has no direct influence on the movement commands other than through movement updates; consistent with perceptual studies (e.g., Craig et al. 2005; Trewhella et al. 2003; Werkhoven et al. 1992) and other studies on interception (Brouwer et al. 2002; Port et al. 1997; but see Zago et al. 2009), the model only uses visual target position and velocity signals (see also Smeets and Brenner 1995). Not taking the visual acceleration into account means that at any instant the future visual ball displacement is underestimated in case of acceleration (and overestimated in case of deceleration), which biases the effective target position and thereby the hand movements. This effect arises within the representation of target motion, which may be considered the early stage of the visuomotor transformation. Secondly, the movement vector in the model is coded in a coordinate frame that rotates with the gaze direction; a leftward bias arises when these vectors are transformed into a body-centered (Cartesian) coordinate frame, because only the lateral gaze-centered component is taken into account. Because this means that movements are essentially planned towards the gaze direction, only the visual acceleration in this direction influences movement biases. Not taking this acceleration into account in body-centered coordinates yields an underestimation of the future rightward ball displacement, which explains the leftward direction of the movement biases.
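To make the second effect concrete, the following minimal sketch implements only the geometric simplification described above, not the dynamical neural network itself: the ball is extrapolated without an acceleration term, the hand–target vector is reduced to its gaze-perpendicular component, and the result is projected back onto the bar. The function name, the gain, and the first-order extrapolation are our own illustrative assumptions.

```python
import numpy as np

def lateral_command(ball_xy, ball_vel_xy, hand_x, gaze_angle, t_remaining, gain=1.0):
    """Hypothetical sketch of the gaze-dependent simplification described above.

    ball_xy, ball_vel_xy : ball position and velocity in body-centered
        coordinates (x lateral/rightward, y forward; m and m/s); acceleration
        is deliberately ignored.
    hand_x : hand position along the lateral bar (its y is taken as 0).
    gaze_angle : azimuth of the gaze line in radians (0 = straight ahead,
        positive = rightward).
    Returns a commanded lateral hand velocity.
    """
    # First-order extrapolation of the ball: the future displacement of an
    # accelerating ball is underestimated (the first source of bias).
    target_xy = np.asarray(ball_xy, float) + np.asarray(ball_vel_xy, float) * t_remaining

    # Hand-target vector in body-centered coordinates.
    ht = target_xy - np.array([hand_x, 0.0])

    # Rotate into gaze-centered coordinates and keep only the component
    # perpendicular to the gaze line (the model's simplification).
    c, s = np.cos(gaze_angle), np.sin(gaze_angle)
    ht_perp_gaze = c * ht[0] - s * ht[1]

    # Project this gaze-centered movement vector back onto the bar; this
    # projection is where the bias towards the gaze line appears.
    return gain * c * ht_perp_gaze
```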

Fig. 2

Illustration of the origin of movement biases in the visuomotor transformation implemented in the catching model of Dessing et al. (2005), schematically presented in a. A bias arises within the representation of target motion (\( \dot{T} \) in a), which is generally underestimated, for instance because visual ball acceleration is not (fully) taken into account. In b–d this is represented as a black arrow from the ball, decomposed into Cartesian gaze-centered components along and perpendicular to the gaze direction (\( \ddot{X}_{ball-GC} \) and \( \ddot{Y}_{ball-GC} \), grey arrows from the ball). In the model, the planned movement vector (HT from hand [H] to target [T] in a) at any instant in time points in the direction perpendicular to the gaze direction; the hand velocity vector (\( \dot{H} \) in a) is thus influenced by the target motion bias along this direction (grey arrows from the hand). This gaze-centered movement bias influences the hand movements in body- or world-centered coordinates (i.e., along the lateral bar), resulting in the observed movement bias (black arrows from the hand). b–d present this analysis for the situation when the ball is pursued, when fixating forward, and when fixating rightward, respectively

We recently performed an experiment that supported the suggestion that movement biases arise in the representation of target motion (Dessing et al. 2009). This experiment involved occlusion of the ball in different phases of its approach, with the occlusion conditions presented either in random order or in blocks. A smaller leftward movement bias was observed when the initial part of the ball trajectory was occluded, but only during the randomized presentation. Moreover, during the blocked presentation the leftward movement bias was smaller without occlusion than with late occlusion. While these effects do not prove that target acceleration is not taken into account altogether (cf. Bennett et al. 2007), they are consistent with the hypothesis that the future rightward ball displacement is underestimated, as would be expected if the gaze-centered lateral visual acceleration is not fully taken into account. The dependence on presentation order was interpreted to reflect strategies used to correct for this deficiency. The present study examined the validity of the modeled modulation of movement biases within the visuomotor transformation. Because this effect depends on gaze direction, we examined the effect of fixation direction on catching movements.

Figure 2b–d schematically illustrates how, according to our model (Dessing et al. 2005), movement biases are modulated by fixation direction (b: gaze on the ball; c: fixate forward; d: fixate rightward). The dark grey arrows from the ball indicate the gaze-centered components of the ball’s acceleration, which is not directly taken into account within the representation of target motion. Because the modeled visuomotor transformation is based on only the lateral gaze-centered component, movement commands are biased in a direction opposite to this component of the acceleration (grey arrow from the hand, perpendicular to the line of gaze). The actual movement bias in body-centered coordinates is the projection of this gaze-centered bias onto the hand movement-axis (black arrow from the hand). Figure 2c shows that fixating forward results in symmetric movement biases. Similarly, Fig. 2d illustrates the increased leftward movement bias caused by rightward fixation.

In testing these qualitative model predictions we recognized that fixation may influence the representation of target motion in ways currently not implemented in the model. During fixation, the ball is located in the retinal periphery, which may introduce small movement biases through exaggerations of retinal eccentricity (Bock 1986; Enright 1995; Henriques et al. 1998; Henriques and Crawford 2000). In addition, perceived target velocity is reduced in the periphery (e.g., Brooks and Mather 2000; Johnston and Wright 1986; Tynan and Sekuler 1982). To assess the impact of these effects on the predictions provided in Fig. 2, we simulated our catching model with all combinations of these effects (Appendix). Even though both effects may substantially influence the predicted hand movements, these simulations confirmed the validity of our prediction that hand movements should be biased leftward when fixating to the right compared to when fixating forward, due to the hypothesized gaze direction-dependent visuomotor transformation.
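The Appendix simulations are not reproduced here, but the sketch below illustrates one way the two peripheral effects could be imposed on the visual ball signals before they enter the transformation. The function and both gain values are illustrative assumptions of ours, not fitted parameters of the model.

```python
import numpy as np

def distort_peripheral_signals(ball_azimuth, ball_azimuth_vel, gaze_azimuth,
                               ecc_gain=1.1, vel_gain=0.8):
    """Illustrative (not the authors') distortions of peripheral vision.

    ball_azimuth, ball_azimuth_vel : visual direction and angular velocity of
        the ball (rad, rad/s); gaze_azimuth : fixation direction (rad).
    ecc_gain > 1 exaggerates retinal eccentricity; vel_gain < 1 underestimates
    peripheral target velocity. Both values are placeholders.
    """
    eccentricity = ball_azimuth - gaze_azimuth
    perceived_azimuth = gaze_azimuth + ecc_gain * eccentricity
    perceived_azimuth_vel = vel_gain * ball_azimuth_vel
    return perceived_azimuth, perceived_azimuth_vel
```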

While the present experiment was motivated by a particular catching model, its scope is broadened by the fact that the effects of fixation on tasks requiring dynamic visuomotor transformations have not been investigated before. For instance, before conducting this study, we did not know whether participants could actually perform the task. Recordings of gaze direction revealed that only two-thirds of our participants could maintain fixation well enough to be included in the study. For the included participants, catching performance was slightly reduced compared to that typically observed for unconstrained viewing. Importantly, the spatial features of the hand movements were unaffected by fixation direction. This is inconsistent with the gaze direction-dependent visuomotor transformation implemented in our catching model. While the results were somewhat clouded by considerable inter-individual variability, this suggests that the movement biases observed here and before predominantly arise within the representation of target motion.

Methods

Participants

Twelve right-handed participants (six male, six female, mean age 24.5 years, range 19–52; mean handedness quotient 96.6, Oldfield 1971) volunteered to participate in the experiment. They reported normal or corrected-to-normal vision (stereo-acuity < 40 arcsec; Stereo Fly Test, Titmus Optical Co., Inc., Chicago, Illinois) and gave their informed consent before participation. Directly prior to the main experiment, participants also took part in a catching experiment involving visual occlusion (Dessing et al. 2009).

Experimental set-up

The experimental set-up was largely the same as that used by Dessing et al. (2005, 2009). Participants sat in a chair, while catching approaching balls passing them on the right (see Figs. 3a, 4 for an illustration of the ball coordinates, with x being positive in rightward direction, y in forward direction, and z in upward direction). They could move their right hand in the lateral direction only, along a horizontal bar positioned at shoulder level (i.e., a Velcro band strapped around the bar was connected to another band around the wrist). Balls (diameter 8 cm; mass 0.145 kg) approached the participants along one of eight trajectories that were defined by two initial ball positions (IBPs, 25 cm apart, referred to as near and far; the near IBP was located 32.5 cm to the right of the center of the chair) and four interception points (IPs, 15 cm apart, referred to as IP1, IP2, IP3, and IP4, respectively; IP1 was located 22.5 cm to the right of the center of the chair; Fig. 3a). The forward and vertical coordinates of the IPs relative to the bar were determined based on pilot measurements of participants holding a ball stationary at the IPs; on average, the balls were held 7.5 cm in front of and 7.0 cm above the horizontal bar (which were thus defined as the forward and vertical coordinates of the IPs). The center of the bar was 1.07 m above the ground and, on average, 17.5 cm in front of the eyes. The hand started at one of three initial hand positions (IHPs, 15 cm apart, located in between the IPs, referred to as near, middle, and far; Fig. 3a).

Fig. 3

a Top-view of the experimental conditions used in the present study, showing the configuration of the initial ball positions (IBPs), interception points (IPs), initial hand positions (IHPs), fixation points (FPs), and suspension points of the wires. b Illustration of the suspension mechanism of the balls

Fig. 4

a–c represent the x-, y-, and z-coordinates of the balls (i.e., X_ball, Y_ball, and Z_ball), respectively, for the eight different ball trajectories as a function of the time remaining before contact (TTC). (Note that ball motion in the yz-plane did not differ between the trajectories.) d The lateral (θ) and vertical (ψ) visual coordinates (Fick angles) for the eight ball trajectories, calculated from the observation point, located at (0, 0, 0)

Balls were suspended from the ceiling (at a height of 3.30 m) using plastic-coated steel wires (length: 2.50 m, diameter 0.2 mm) with a small magnet at the lower end. A metal plate was attached to the ball with a 5 mm long Kevlar wire. Prior to ball release, the coated steel wire was attached to a screw embedded in the ball (Fig. 3b), and the ball was suspended at one of the IBPs (2.04 m high, 3.5 m in front of the IPs) by connecting the metal plate to an electromagnet. When the ball was caught, the magnet usually detached from the ball due to the impact. The glow-in-the-dark balls were charged using three UV-emitting fluorescent tubes, two suspended about 15 cm below the IBPs and one behind the participant. A fixation point in the form of a 1 cm² red light emitting diode (LED) was presented on a wall directly behind the IBPs (height: 1.52 m). Depending on the fixation condition, it was placed either forward of the middle of the chair or 0.94 m to its right (i.e., at an average lateral visual angle of 14°). Besides this LED, the tubes, balls, and control computer were the only light sources present during the experiment; light from the control computer was blocked from the participant by means of a screen. Participants could just see their hand when specifically asked to look at it, but they reported not seeing it during the actual trials.
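The reported average lateral visual angle of 14° for the rightward LED can be roughly reconstructed from the distances given above; the eye-to-wall distance below is an assumption pieced together from the stated geometry (IBPs 3.5 m in front of the IPs, IPs 7.5 cm in front of the bar, eyes on average 17.5 cm behind the bar), so the check is approximate.

```python
import math

# Approximate eye-to-wall distance (assumption): 3.5 m (IP to IBP)
# + 0.075 m (IP in front of the bar) + 0.175 m (bar in front of the eyes).
eye_to_wall = 3.5 + 0.075 + 0.175           # ~3.75 m
lateral_offset = 0.94                        # rightward LED offset (m)

angle_deg = math.degrees(math.atan2(lateral_offset, eye_to_wall))
print(f"lateral visual angle ≈ {angle_deg:.1f} deg")   # ≈ 14 deg
```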

An Optotrak 3020 camera system (Northern Digital, Inc., Waterloo, Canada), positioned 2.6 m to the right of the participant at a height of approximately 2 m, registered the position (at 250 Hz) of a marker placed on a piece of polystyrene taped to the back of the hand. The Optotrak recordings were triggered at the moment of ball release. After the experiment, the Optotrak system was also used to measure the flight times (1,182 ms, accuracy 2 ms) with a camera placed perpendicular to the lateral axis through the IPs, using the occlusion time of a marker placed just behind the ball’s passing position.

In a separate session (run about 1.5 years after the initial experiment) we examined whether our participants could accurately maintain fixation during our task. Binocular gaze direction (pupil–corneal reflection mode, 250 Hz) was recorded using an EyeLink II system (SR Research Ltd., Osgoode, Ontario, Canada) attached to a head band. At the moment of ball release a synchronization pulse was sent to and recorded by the EyeLink computer. Head orientation was measured using Optotrak (at 250 Hz), through three markers on the head band. We also recorded the position (using an Optotrak pointer) of six points close to the eyes together with the head markers, allowing the 3D eye positions to be reconstructed during the trials from the head markers alone. This session involved a random selection of 12 conditions from the initial experiment for each fixation condition (i.e., 24 trials in total, which differed across participants).

Procedure

Participants were instructed to catch the approaching ball with their right hand and were free to move their head and body while doing so. They were informed that the next trial was about to start by the instruction to fixate on the LED. Balls were released 500–1,500 ms (randomized) after the experimenter pressed a key. When the trial ended, participants were instructed to move their hand to a comfortable position. A screen was placed in front of the participant to prevent him or her from predicting the conditions of the upcoming trial (i.e., from vision of the experimenter suspending the next ball). The ball was attached to the wire and suspended at the IBP corresponding to the new trial. Another ball was suspended at the other IBP to prevent the use of foreknowledge of the ball trajectory. Subsequently, the experimenter manually guided the hand of the participant to the new IHP. All conditions (two fixation directions × two IBPs × four IPs × three IHPs) were presented twice, resulting in a total of 96 trials. The two fixation conditions were presented in blocks, the order of which was counterbalanced across participants. Within these blocks, trials were presented in random order. Before each block, eight practice trials were run in order to familiarize the participants with the task, and particularly with the fixation condition. During the experiment, the experimenter visually checked whether the participants kept fixating during the trial, and participants were also asked to report directly if they accidentally broke fixation at any time during the trial. The few trials in which they indicated that fixation was broken were repeated at the end of the fixation block. After the experiment, participants were asked to judge the number of IPs and IHPs used in the experiment. Running the experiment took about 90 min.

The examination of gaze direction started with the built-in calibration procedure of the EyeLink system (nine point grid, calibration accuracy ~ 0.6°). For two participants the calibration was repeated during the session, because of possible camera movement relative to the head. After each calibration, we recorded the position of six reference points on the head, as well as a reference head position (when fixating forward). The experiment started after about five practice trials; an additional practice trial was run when the fixation direction changed. All other procedures matched those of the original experiment. During the session, participants were frequently asked to assess their own fixation performance. While some participants indicated that the task was difficult, all believed they maintained fixation. After the experiment, we once more recorded the reference head orientation. This session lasted about 45 min in total (60 min when an additional calibration was required). All procedures were approved by the Ethics Committee of the Faculty of Human Movement Sciences of VU University Amsterdam.

Data reduction

Both successful (i.e., trials in which the ball was caught; n = 592) and unsuccessful trials (n = 146) were included in the analyses. Missing values in the hand marker position were reconstructed using a cubic spline interpolation (for gaps of maximally 25 consecutive samples). The position signals were low-pass filtered using a fourth-order recursive Butterworth filter (cut-off frequency: 10 Hz). In total, 30 trials had to be excluded because the hand was not positioned correctly by the experimenter or because the wrong wire was attached to the ball (for all conditions at least one good trial was retained). The lateral position of the hand marker was used to calculate three dependent variables, as outlined in the following.

While our model predictions only concerned spatial biases in the hand movements, we deemed it informative to also report results on the timing of movement initiation, if only for comparison with previous findings. Movement initiation was defined as the moment at which the absolute lateral hand velocity exceeded 2% of the first velocity peak larger than 0.05 m/s. The analyses of movement biases throughout execution focused on three variables, describing early, final, and average biases in the hand movement, respectively. The early movement bias (ΔX_h-early) was defined as the hand position at 200 ms after initiation (or at contact, if movements were initiated less than 200 ms before interception), expressed relative to the position of a minimal-jerk trajectory (Flash and Hogan 1985) from the hand position at initiation to the IP, evaluated at that moment. Comparison to this reference trajectory was applied in order to minimize the effects of variations in movement time and movement distance on ΔX_h-early. The final movement bias was defined as the constant error of the hand position at interception (CE_HPI), which is the lateral hand position at interception relative to the IP (i.e., CE_HPI > 0 indicates a final hand position to the right of the IP). The average bias over the entire movement from initiation to interception (average movement bias, ΔX_h-av) was defined as the average lateral hand position in this period relative to a position exactly in between the hand position at initiation and the IP.
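The three bias measures can be summarized in a short sketch; the minimal-jerk reference follows Flash and Hogan (1985), while the function names, sampling assumptions, and sign convention (rightward positive, so negative biases are leftward) are ours.

```python
import numpy as np

def minimum_jerk(x0, xf, t, T):
    """Minimal-jerk position at time t for a movement of duration T (Flash & Hogan 1985)."""
    tau = np.clip(t / T, 0.0, 1.0)
    return x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

def movement_biases(x_hand, t, t_init, t_contact, ip_x):
    """Early, final, and average lateral movement biases (sketch; names are ours).

    x_hand : lateral hand position samples; t : matching time stamps (s);
    t_init : movement initiation time; t_contact : interception time;
    ip_x : lateral position of the interception point.
    """
    x_hand, t = np.asarray(x_hand, float), np.asarray(t, float)
    x0 = np.interp(t_init, t, x_hand)
    T = t_contact - t_init

    # Early bias: hand position 200 ms after initiation (or at contact,
    # whichever comes first), relative to the minimal-jerk reference.
    t_early = min(t_init + 0.2, t_contact)
    dx_early = np.interp(t_early, t, x_hand) - minimum_jerk(x0, ip_x, t_early - t_init, T)

    # Final bias: constant error of the hand position at interception.
    ce_hpi = np.interp(t_contact, t, x_hand) - ip_x

    # Average bias: mean hand position between initiation and interception,
    # relative to the midpoint between the initial hand position and the IP.
    mask = (t >= t_init) & (t <= t_contact)
    dx_av = np.mean(x_hand[mask]) - 0.5 * (x0 + ip_x)

    return dx_early, ce_hpi, dx_av
```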

In total, three trials recorded during the fixation assessment (all for different participants) were not used in the analyses (due to technical or procedural errors). To reconstruct gaze direction, we used the average of the unfiltered left and right eye-in-head angles output by the EyeLink system, in combination with the unfiltered head orientation and eye positions as measured using Optotrak. The latter was used to correct for small deviations in the required fixation direction due to lateral head translation (for instance occurring for relatively short participants, when the hand occupied a more eccentric position). The average of the reference head positions recorded before and after the session was taken as the zero orientation of the head.
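For the horizontal deviations reported below, the reconstruction reduces approximately to adding the cyclopean eye-in-head azimuth to the head-in-world azimuth and correcting the required fixation direction for the reconstructed eye position. The planar simplification and all names in the sketch below are ours, not the full 3D procedure.

```python
import numpy as np

def gaze_azimuth_world(eye_in_head_az_l, eye_in_head_az_r, head_az,
                       eye_x, eye_y, fp_xy):
    """Approximate horizontal gaze direction and fixation error (planar sketch).

    eye_in_head_az_l/r : left/right eye-in-head azimuths (rad) from the EyeLink;
    head_az : head-in-world azimuth (rad) from the Optotrak head markers;
    eye_x, eye_y : reconstructed cyclopean eye position (m, x lateral, y forward);
    fp_xy : fixation point position (m).
    """
    eye_in_head_az = 0.5 * (eye_in_head_az_l + eye_in_head_az_r)  # cyclopean average
    gaze_az = head_az + eye_in_head_az                            # eye-in-world azimuth

    # Required fixation azimuth, corrected for lateral head/eye translation.
    required_az = np.arctan2(fp_xy[0] - eye_x, fp_xy[1] - eye_y)
    return gaze_az, gaze_az - required_az   # (gaze direction, fixation error)
```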

Statistical analyses

All dependent variables were examined using repeated measures analyses of variance (ANOVAs; P < 0.05) with within-subject factors Fixation (forward and rightward), IBP (near and far), IP (IP1, IP2, IP3, and IP4), and IHP (near, middle, and far). When the sphericity assumption was violated, Huynh-Feldt corrected degrees of freedom were applied. Paired-samples t tests were used for post hoc analyses, with Sidak step-down-adjusted P values for each test of main effects (i.e., for an A × B interaction, separate adjustments were made for tests of the effects of A and tests of the effects of B). Data are presented as “mean ± standard deviation”, and listed in the order of the levels as mentioned in this paragraph, unless stated otherwise.
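The Sidak step-down adjustment for the post hoc paired comparisons can be written compactly; the following is a generic implementation of that procedure (equivalent to the Holm-Sidak method) applied to hypothetical data, not the authors' analysis scripts.

```python
import numpy as np
from scipy.stats import ttest_rel

def sidak_stepdown(pvals):
    """Step-down Sidak (Holm-Sidak) adjusted p values for a family of tests."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    m = len(p)
    adj = np.empty(m)
    running_max = 0.0
    for rank, idx in enumerate(order):
        val = 1.0 - (1.0 - p[idx]) ** (m - rank)   # Sidak correction for remaining tests
        running_max = max(running_max, val)        # enforce monotonicity (step-down)
        adj[idx] = running_max
    return adj

# Example: post hoc paired t tests over the four IP levels (hypothetical data);
# data has shape (n_participants, 4), one column per IP level.
data = np.random.default_rng(0).normal(size=(8, 4))
pairs = [(i, j) for i in range(4) for j in range(i + 1, 4)]
raw_p = [ttest_rel(data[:, i], data[:, j]).pvalue for i, j in pairs]
print(sidak_stepdown(raw_p))
```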

Results

Before discussing the qualitative and quantitative features of our observations, it is useful to discuss some more general features of the data. As in our previous experiments, participants generally had no idea about the number of IHPs and IPs used, which they overestimated by at least two. This implies that they could not rely on expectations regarding the ball trajectory and thus had to use online visual information. Importantly, before discussing any results concerning the analyses of hand movements we first report the findings of the assessment of fixation maintenance. Gaze direction was aligned to its average value over the first 200 ms, so as to determine the within-trial variations in gaze direction (Fig. 5). While our participants generally were quite confident that they accurately gazed at the fixation point, some showed an occasional saccade to the ball and back to the fixation point. Participant 4 actually appeared to follow the ball in all trials. We calculated the range of gaze angles adopted during each trial from ball release until 150 ms before contact. The latter moment in time was chosen because the EyeLink system inevitably restricts the field of view due to the small cameras placed just below the eyes. Given the head orientations used in the experiment, the ball in some cases may have become occluded by the camera(s) later during ball approach. Any change in gaze direction after this moment might thus be an artifact of wearing the EyeLink system. Moreover, we mainly aimed to ensure that the quality of fixation did not differ between the fixation conditions, such that our analyses of hand movements would not be confounded by variations in the quality of fixation. Changes in gaze direction occurring just before interception are irrelevant in this respect, because visual information picked up less than 150 ms before interception (i.e., within the visuomotor delay) cannot be used to modulate hand movements before interception. We decided to exclude those participants for whom the gaze angle range exceeded 3° in more than one of the 24 trials, which led to the exclusion of Participants 1, 2, 4, and 9 from the analyses of the main experiment (see footnote 1).
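The exclusion rule (a gaze-angle range above 3° in more than one of the 24 trials, evaluated from ball release until 150 ms before contact) can be expressed as a short sketch; the array layout and names below are assumed.

```python
import numpy as np

def exclude_participant(gaze_az_trials, t_trials, t_contact_trials,
                        max_range_deg=3.0, max_violations=1):
    """Apply the fixation-quality criterion described above (sketch).

    gaze_az_trials : per-trial horizontal gaze angles (deg), aligned to the
        average over the first 200 ms; t_trials : matching time stamps
        (s, from ball release); t_contact_trials : per-trial contact times (s).
    A participant is excluded when more than `max_violations` trials show a
    gaze-angle range above `max_range_deg` before 150 ms prior to contact.
    """
    violations = 0
    for gaze, t, t_contact in zip(gaze_az_trials, t_trials, t_contact_trials):
        gaze, t = np.asarray(gaze, float), np.asarray(t, float)
        window = t <= (t_contact - 0.150)
        gaze_range = np.ptp(gaze[window])   # max minus min within the window
        if gaze_range > max_range_deg:
            violations += 1
    return violations > max_violations
```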

Fig. 5

Single trial horizontal gaze deviations (as a function of the time remaining before contact, TTC) for all participants (indicated in each panel) observed in the experiment conducted to assess the quality of fixation (aligned relative to the average of the first 200 ms). Black lines correspond to fixating forward and grey lines to fixating rightward. The vertical dashed lines correspond to 150 ms before interception, the moment until which the range of gaze directions in each trial was calculated

Because the head was not restrained, gaze fixation could involve combined opposite rotations of the head-in-world and the eye-in-head (vestibulo-ocular reflex, VOR). We therefore examined the azimuthal head-in-world Fick angles as a function of the azimuthal eye-in-head Fick angles, because a consistent negative relation indicates the presence of a VOR. Most participants indeed showed a VOR in the second half of each trial for both fixation directions, although its strength differed over trials and participants. The eye-in-head movements thus compensated for changes in head orientation. We will discuss this finding more extensively below when interpreting the observed hand movements.
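One simple way to quantify the relation described above is the per-trial slope between the two azimuth series, where a slope near −1 corresponds to full compensation. The sketch below is a generic regression, not the analysis actually used.

```python
import numpy as np

def vor_slope(head_az, eye_in_head_az):
    """Slope of eye-in-head azimuth regressed on head-in-world azimuth (sketch).

    Both inputs are per-trial angle series (deg). A consistently negative
    slope (close to -1) indicates that eye rotations compensate for head
    rotations, i.e., a vestibulo-ocular reflex.
    """
    slope, _intercept = np.polyfit(np.asarray(head_az, float),
                                   np.asarray(eye_in_head_az, float), 1)
    return slope
```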

It is interesting to note that while the included participants could accurately maintain fixation (i.e., they were able to not look at the balls), their performance levels were reasonably high. Whereas catching success in this task (under unrestricted vision) is normally above 95% (Dessing et al. 2005, 2009), the participants in the present experiment caught 79.7% of the balls. This catching success appeared to be similar for the fixation conditions (81.3 and 78.2%), IBPs (76.1 and 83.2%), and IHPs (79.6, 81.2, and 78.2%), as well as IPs (81.8, 80.2, 77.8, and 78.8%). Although part of this decreased performance most likely was due to inadequate timing of the grasp, the present study focused exclusively on the effects of fixation on hand positioning.

Figure 2b–d presents a vector-based illustration of the model predictions of the effects of fixation direction on the movement biases during catching. Figure 6 shows these predictions more explicitly by giving the kinematics predicted from simulations of a version of our catching model (see Appendix). For comparison, we simulated the situation in which the ball is visually pursued (Fig. 6a; a condition not included in the experiment). The effect of fixation on the predicted kinematics is shown in Fig. 6b (forward fixation) and Fig. 6c (rightward fixation), and these panels clearly illustrate that fixation direction should influence the hand movement bias. With forward fixation the hand movements deviate rightward compared to the situation of visual pursuit, and with rightward fixation they deviate leftward. This is most evident in the figure for balls starting at the far IBP. This prediction was examined using analyses of movement biases occurring early and late during the catching movement, as well as the average bias throughout the movement (see footnote 2). Figure 7 shows the individual hand movements. Although this figure shows that there was a substantial degree of inter- as well as intra-individual variation in the hand movements (see “Individual differences”), it also suggests that fixation did not induce consistent effects across conditions and participants. This was corroborated by the statistical analyses reported next.

Fig. 6

Simulated (predicted) kinematics of Dessing et al.’s (2005) catching model (see Appendix), presented for the situation when the ball is pursued (a), when fixating forward (b), and when fixating rightward (c). TTC represents the time remaining before contact

Fig. 7

Individual hand movements (averaged over the two repetitions for each condition, unless one of these was omitted from the analyses) as a function of the time to contact (TTC) from 0.8 s before contact until contact for the two fixation conditions (dark grey fixating forward, light grey fixating rightward). In each panel, the inset represents a schematic top-view of the ball trajectories; the thick line represents the trajectory corresponding to the panel

Movement initiation

In simulating our model, we had to assume that initiation occurred at some point in time, because the model does not include control of movement initiation. Thus, our predictions concerning the effects of fixation depended on the assumption that fixation did not substantially influence the timing of movement initiation. This assumption was supported by the absence of any effect of Fixation on T_ini. As can be seen in Table 1, movement initiation did occur earlier for balls starting at the near IBP than for balls starting at the far IBP. Post hoc analyses of the effect of IHP showed that initiation occurred earlier when the hand started far to the right, compared to the middle IHP. Figure 8b shows averages of T_ini for the IP × IHP interaction [F(6, 42) = 6.63, P < 0.001, η_p² = 0.49], post hoc analyses of which revealed that movements were initiated later from the near IHP, compared to the other IHPs, but only when the ball approached the nearest IP.

Table 1 Main effects
Fig. 8

a Time-averaged hand movements as a function of the time to contact (TTC) (± between-subject standard errors per sample) from 0.8 s before contact until contact for the two fixation conditions. To compute the average movements, the movements for each participant were averaged over the two repetitions (unless one of these was omitted from the analyses) and the resulting movements were subsequently averaged over participants. b Moment of initiation (T_ini) for all combinations of the interception point (IP) and the initial hand position (IHP). c Early movement bias (ΔX_h-early) for all combinations of the initial ball position (IBP) and IP. Error bars indicate standard errors

Early movement bias

Figure 6 clearly shows that the effect of Fixation should be apparent already in the earliest phase of the movement, and for this reason we analyzed the early movement bias, ΔX_h-early. Contrary to the predictions, there was no main or interaction effect of Fixation on the early movement bias (as can be appreciated from the time-averaged hand movements presented in Fig. 8a). However, the early hand movements were affected by other factors. ΔX_h-early was more leftward for balls approaching from the near than from the far IBP (Table 1). ΔX_h-early was also more leftward the further to the right the ball passed (Table 1; all differences significant). These two factors also interacted [F(3, 27) = 5.04, P < 0.01, η_p² = 0.42; Fig. 8c]. Post hoc analyses showed that the early movement bias only differed between balls moving to IP2 and IP3 (significant for both IBPs). In addition, for the far IBP the difference in ΔX_h-early between the nearest two IPs did not reach significance.

Final movement bias

While our predictions pertained mainly to the biases arising during the movement, fixation may be expected to influence the final movement bias (at interception) in the same direction as these movement biases (see Fig. 6). As mentioned before, Fig. 8a, which depicts the time-averaged hand movements, clearly shows that, on average, fixation did not affect movement biases. Contrary to our expectations, no significant effects of Fixation on CE_HPI were obtained.

The final movement bias was affected by some other factors. The effect of IP showed that the hand was positioned significantly more to the left for IP4 compared to the other IPs (Table 1). Due to the relation between CE_HPI and catching performance, we further examined the effect of IP by breaking down the averages for the misses and hits. Figure 9 shows that whereas for IP2 the misses (black) mostly comprised the right part of the distribution, for IP4 the distribution of the misses was much more leftward. This was confirmed statistically by comparing the CE_HPI values for misses and hits (IP2: t(59.1) = 2.94, P < 0.01; IP4: t(103.7) = −7.45, P < 0.001). Finally, post hoc analyses of the effect of IHP (Table 1) did not show any significant differences, but the effect identified by the ANOVA appeared to result from a more rightward hand position at interception when the hand started at the far IHP.

Fig. 9

Distributions of final movement bias (CE_HPI) as a function of the lateral position of the interception point (IP) for trials in which the ball was missed (black) and caught (grey). The dotted vertical lines indicate the average values of hits and misses for each IP

Our simulations suggested that the final movement bias would deviate leftward due to fixation through an underestimation of peripheral ball velocity, particularly at further IPs (see Appendix). Comparison of the average CE_HPI observed here (Table 1) with those in a previous experiment with no restrictions regarding gaze direction (using the same participants; Dessing et al. 2009, see their Table 1) showed that the CE_HPI values obtained in the present experiment were not more leftward than those in that more natural situation. This suggests that underestimation of peripheral ball velocity did not influence the hand position at interception in our experiment.

Average movement bias

Our predictions with respect to the average movement bias matched those for the early movement bias, as discussed before. The data again underscored the limited predictive value of our model regarding the effects of gaze direction. The statistical analyses for ΔX_h-av only revealed effects of IBP and IP, and post hoc analyses showed that the average movement bias was more leftward for balls approaching from the near than from the far IBP and more leftward the further to the right the ball passed the participant (Table 1; all differences significant).

Individual differences

The analyses presented above all focused on group effects, thus averaging out individual variations. Compared to our earlier catching experiments, however, there was a substantial degree of inter- as well as intra-individual variation in the hand movements (see Fig. 7). More detailed inspection of the individual movement traces suggested some specific differences in the effects of Fixation for the two initial ball positions, even though this Fixation × IBP interaction was not significant for any of the variables. We plotted the individual values of all dependent variables for this interaction in Fig. 10. Given the small number of repetitions, we could not statistically analyze the individual differences and therefore only discuss these qualitatively.

Fig. 10

Individual averages for all dependent variables for all combinations of fixation direction and initial ball position (IBP); participants coded as in Fig. 5. From left to right: moment of initiation (T_ini), early movement bias (ΔX_h-early), final movement bias (CE_HPI), and average movement bias (ΔX_h-av)

Figure 10 illustrates that the pattern of results varied considerably over participants. Some showed effects of IBP consistent across fixation directions and vice versa (whether they were present or absent, e.g., P1, P3, and P11 for ΔX_h-early, P2 for CE_HPI, P3, P7, and P10 for T_ini, and P9 for ΔX_h-av). Others appeared to show differential effects of Fixation for the two IBPs (e.g., P1, P7, and P10 for CE_HPI and ΔX_h-av). Two participants (P3 and P9) showed a larger leftward average movement bias for rightward fixation irrespective of the ball’s starting position. Our model, however, predicted the effect to be present already early during movement execution. Thus, Fixation influenced the hand movements for some participants, but the direction and magnitude of this effect (also as a function of the IBP) varied considerably and did not comply with our model predictions.

Discussion

Psychophysical evidence suggests that visuomotor transformations operate on gaze-centered representations (Henriques et al. 1998; Khan et al. 2007; McIntyre et al. 1997, 1998; Vetter et al. 1999), a suggestion that is backed up by neurophysiological findings (e.g., Batista et al. 1999; Buneo and Andersen 2006; Buneo et al. 2002; Desmurget et al. 1999; Medendorp et al. 2003, 2005). Using a neural network model, we previously showed that details of catching movements, particularly the consistent leftward biases (Fig. 1), can also be understood in terms of the underlying gaze-centered representations (Dessing et al. 2005). In particular, we argued that biases arising within the representation of target motion are modulated during the transformation from gaze-centered to body-centered coordinates. A recent experiment supported the suggestion that movement biases arise within the early representations of target motion (Dessing et al. 2009). The present study was designed to evaluate the second aspect of our model, namely a specific modulation of movement biases within the visuomotor transformation.

We asked participants to fixate, rather than to visually pursue the ball, because the model predicted that fixation direction would systematically modulate the movement biases within the visuomotor transformation (Figs. 2 and 6). First of all, the participants performed reasonably well: they caught about 80% of all balls, which is only about 15 percentage points less than when they were allowed to gaze at the ball (Dessing et al. 2009). For two of the passing positions used, the misses appeared to involve a misplacement of the hand (a rightward error for IP2 and a leftward error for IP4); the other balls thus appeared to be missed due to inadequate timing of the grasp. The main finding, however, was that fixation direction did not influence the observed hand movements in any way. Even though this result was influenced by considerable inter-individual variability (see also below), it refutes our implementation of the visuomotor transformation (cf. Dessing et al. 2005). In hindsight, this may not be entirely surprising; our initial proposal was rather extreme in that the planned movement vector only used the component of the hand–target vector perpendicular to the gaze direction, thus neglecting an entire dimension. In contrast, the present results indicate that all dimensions of the movement vector need to be considered.

Although our results invalidate our specific implementation of the visuomotor transformation, they do not refute the gaze-centered organization of the position input to this transformation (see references above), nor do they rule out that biases may have occurred in this transformation. Indeed, many authors have claimed that biases arise within the visuomotor transformation, for instance through parcellation and linear approximations (Flanders and Soechting 1990; Flanders et al. 1992). Other behavioral findings and theoretical contributions (Blohm and Crawford 2007; Henriques and Crawford 2002; Henriques et al. 2003) suggest that these biases are only small, and that nonlinear aspects of the visuomotor transformation (related to the eye–head–shoulder–arm geometry) are represented rather accurately. We regularly observed substantial biases (e.g., movements in the wrong direction, more than 10 cm in amplitude; see also Dessing et al. 2005, 2009). The errors typically purported to arise within the visuomotor transformation are considerably smaller (e.g., Flanders et al. 1992). The fact that these biases are modulated through early and late visual occlusion (Dessing et al. 2009) and depend on the specific shape of the trajectory (see Montagne et al. 1999) strongly suggests that they predominantly arise before the movement vector is calculated, within the representation of target motion.

As argued before, biases may arise within representations of target motion through the low sensitivity to visual (non-gravitational; Zago et al. 2009) acceleration (Craig et al. 2005; Dessing et al. 2005, 2009; Trewhella et al. 2003; Werkhoven et al. 1992). With enough viewing time (≥500 ms; Bennett et al. 2007), visual target acceleration can be partly taken into account, probably through history-dependent mechanisms using changes in the velocity signal (Price et al. 2005; Schlack et al. 2007). Due to the high curvilinearity of our target trajectories (Fig. 4d), it is unlikely that such a mechanism performs accurately at initiation (about 500 ms after ball release). For our ball trajectories, velocity signals themselves may also be affected due to the small initial displacement in the azimuthal and radial directions. These effects on the coded target motion (through the underlying velocity and acceleration signals) will pass through the visuomotor transformation to influence the hand movements. In the following, we discuss how this may yield a leftward movement bias in our catching task.
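As an aside, a history-dependent acceleration estimate of the kind referred to above could take the form of a leaky filter over finite-difference velocity changes; this is a generic illustration with an assumed time constant, not the mechanism proposed in the cited studies.

```python
import numpy as np

def leaky_acceleration_estimate(velocity, dt, tau=0.2):
    """Generic leaky estimate of acceleration from a velocity history (illustration).

    velocity : sampled target velocity (m/s); dt : sample interval (s);
    tau : time constant (s) of the leaky integrator. The estimate only becomes
    useful after an extended viewing time (on the order of tau), consistent
    with the >=500 ms reported in the text.
    """
    accel_raw = np.gradient(np.asarray(velocity, float), dt)  # finite-difference acceleration
    est = np.zeros_like(accel_raw)
    alpha = dt / tau
    for i in range(1, len(accel_raw)):
        est[i] = est[i - 1] + alpha * (accel_raw[i] - est[i - 1])  # first-order low-pass filter
    return est
```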

The fact that the observed hand movements were similar to those during pursuit (see footnote 3) is consistent with a representation of target motion in world-centered coordinates (i.e., uninfluenced by body configuration). A world-centered representation of target motion has been reported to exist in the medial superior temporal area in posterior parietal cortex (Ilg et al. 2004; Inaba et al. 2007), electrical stimulation of which also induces biases in interceptive reaching (Ilg and Schumann 2007; see also Schenk et al. 2005). More specifically, this area appears to be sensitive to horizontal and vertical motion in the frontoparallel plane, which relates to the rates of change of the target’s azimuth and elevation angles, and to radial motion (e.g., Maunsell and van Essen 1983; Tanaka and Saito 1989). In Dessing et al. (2009), we argued that the leftward bias may originate from the fact that azimuthal (i.e., angular) motion underestimates the actual lateral motion vector perpendicular to the eye–ball vector (i.e., it neglects distance-dependency). This explanation can be formulated more generally as an imbalance in the representation of radial and azimuthal target motion (for the present argument, it suffices to focus only on the horizontal plane). In this formulation, target motion refers to velocity as well as acceleration, as indicated above.

An underestimation of azimuthal motion relative to motion in the radial direction for our ball trajectories indeed predicts a leftward bias, because the azimuthal component always points rightward relative to our set-up. We implemented an underestimation of azimuthal target motion in an adapted version of our catching model (i.e., with a full 3D coding of the movement vector, see Appendix). Simulations confirmed that this adapted model could indeed describe the leftward bias observed in our experiments (see Fig. 11). Because the resulting leftward bias is also towards the eyes/head, the imbalance could be related to an evolved protective tendency (Neppi-Mòdona et al. 2004). However, because the leftward bias is specific to the curvilinear ball trajectories used (see Arzamarski et al. 2007; Montagne et al. 1999), a perceptual origin seems more likely (see also Harris and Drga 2005; Peper et al. 1994).
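The direction of this predicted bias can be checked with a small numerical example: for a ball to the right of the eye and approaching, underweighting the azimuthal motion component relative to the radial one shifts the reconstructed lateral velocity leftward. The weight and the numbers below are purely illustrative, not parameters of the model.

```python
import numpy as np

def reconstructed_velocity(pos_xy, vel_xy, azimuthal_weight=0.8):
    """Rebuild the ball velocity with underweighted azimuthal motion (illustration).

    pos_xy, vel_xy : ball position/velocity relative to the eye (x rightward,
    y forward; m and m/s). azimuthal_weight < 1 underestimates the azimuthal
    (angular) motion relative to the radial motion.
    """
    pos = np.asarray(pos_xy, float)
    vel = np.asarray(vel_xy, float)
    r = np.linalg.norm(pos)
    e_r = pos / r                        # radial unit vector (eye -> ball)
    e_t = np.array([e_r[1], -e_r[0]])    # direction of increasing azimuth (rightward here)
    v_r, v_t = vel @ e_r, vel @ e_t
    return v_r * e_r + azimuthal_weight * v_t * e_t

# Ball to the right of the eye and approaching: the reconstructed lateral
# velocity is shifted leftward relative to the true lateral velocity.
true_vel = np.array([0.1, -3.0])
biased = reconstructed_velocity([0.5, 2.0], true_vel)
print(true_vel[0], biased[0])   # 0.1 versus roughly -0.06 (leftward shift)
```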

Fig. 11

Simulated (predicted) kinematics for the current task, generated by the version of our catching model presented in the Appendix

When a target is presented in the periphery, its retinal eccentricity may be exaggerated (Bock 1986; Enright 1995; Henriques et al. 1998; Henriques and Crawford 2000) and its velocity underestimated (e.g., Brooks and Mather 2000; Johnston and Wright 1986; Tynan and Sekuler 1982). Our simulations of these effects (see Appendix) showed that they may substantially influence movement biases. Therefore, the fact that the observed hand movements were very similar to those found when the ball is pursued (Dessing et al. 2005, 2009; see footnote 3) suggests that these effects did not play a substantial role in our experiment. With respect to coded target position, this might be related to the fact that the ball remained in view throughout the trial. This means that targets did not have to be represented in memory, whereas exaggeration of retinal eccentricity is typically probed through memorized target positions. The latter aspect may also limit direct comparison with the results of those studies, even when many aspects of the design are similar to ours (e.g., Beurze et al. 2006; Khan et al. 2007). It cannot be excluded that the retinal eccentricity of both ball and hand was exaggerated, inducing small biases in the movement vector. However, if so, this effect should have been small for hand position coding, because the hand was nearly invisible in our task (see also Appendix). The coded hand position thus probably depended more strongly on proprioceptive signals (van Beers et al. 1999; Graziano 1999; see also Beurze et al. 2006; Sober and Sabes 2003, 2005).

Since the head was not restrained, gaze fixation may have involved small head movements combined with eye movements in the opposite direction (dynamic VOR). Most participants indeed showed this behavior, particularly in the second half of the trial. In our model we ignored these effects, because they do not affect the gaze-centered ball coordinates. However, disentangling gaze-centered representations into their constituents is particularly relevant for situations in which the head is free to move (e.g., Klier et al. 2001), because eye and head direction signals play a role in the visuomotor transformation (Blohm and Crawford 2007; Buneo and Andersen 2006; Crawford et al. 2004). This holds for positional as well as velocity signals: the coding of world-centered target motion requires integration and transformation of different velocity signals, for which the 3D rotational geometry of head and eyes has to be taken into account (Blohm et al. 2008). Indeed, representations of moving targets appear to involve combinations of head- and eye-centered signals (Neppi-Mòdona et al. 2004). Accounting for catching behavior in more detail most likely requires a more exhaustive implementation of target motion coding, in which detailed effects of the ball–eye–head(–body) configuration are considered. Such modeling should be constrained by findings from catching studies in which a wider range of body-in-world, head-in-body, eye-in-head, and ball-on-retina configurations is considered.

The observed hand movements varied considerably over participants (see Figs. 7 and 10), which probably contributed to the fact that only a small number of effects and interactions were significant. None of the participants appeared to show the effects of Fixation as predicted by the model, which corroborates our conclusions with respect to the group statistics. Biases arising within target motion representations may thus differ between persons, in terms of their magnitude as well as their detailed pattern. Such individual differences may arise at different levels of the visuomotor transformation. Individuals may for instance differ in their reliance on vision-based movement updates, which may relate to the underlying control strategy (Dessing et al. 2002). However, many aspects of our data, such as the movement reversals occurring when the hand started close to the IP, suggested that our participants updated their movements (see also Dessing et al. 2009). Individual differences may also arise from the employed coordinate frames, for instance in terms of the relative weighting given to eye-, body-, and/or world-centered frames of reference (Blohm and Crawford 2007; Gentilucci et al. 1997; Heuer and Sangals 1998; Khan et al. 2007; Lemay and Stelmach 2005; McIntyre et al. 1998; Neppi-Mòdona et al. 2004; Snyder et al. 1998; Volcic et al. 2007). Although the present study was not designed to pinpoint the origin of individual differences, the results indicate that it may be worthwhile to zoom in on such potential variations in the control of interceptive actions.

Conclusion

In the present study, we varied fixation direction during catching to evaluate the prediction (based on our catching model) that movement biases in catching are modulated within the visuomotor transformation. This aspect of the model predicted a gaze direction-dependency of the catching movements, which, however, was not observed here. This clearly refuted the visuomotor transformation as implemented in our model. We conclude that movement biases in catching arise mainly in the early representation of target motion, possibly due to an imbalance in the represented radial and azimuthal target motion, generating a bias towards the eyes. Yet, our results also indicated that detailed aspects of the visuomotor transformation may differ among individuals.