Abstract
Catching a ball involves a dynamic transformation of visual information about ball motion into motor commands for moving the hand to the right place at the right time. We previously formulated a neural model for this transformation to account for the consistent leftward movement biases observed in our catching experiments. According to the model, these biases arise within the representation of target motion as well as within the transformation from a gaze-centered to a body-centered movement command. Here, we examine the validity of the latter aspect of our model in a catching task involving gaze fixation. Gaze fixation should systematically influence biases in catching movements, because in the model movement commands are only generated in the direction perpendicular to the gaze direction. Twelve participants caught balls while gazing at a fixation point positioned either straight ahead or 14° to the right. Four participants were excluded because they could not adequately maintain fixation. We again observed a consistent leftward movement bias, but the catching movements were unaffected by fixation direction. This result refutes our proposal that the leftward bias partly arises within the visuomotor transformation, and suggests instead that the bias predominantly arises within the early representation of target motion, specifically through an imbalance in the represented radial and azimuthal target motion.
Introduction
Humans effortlessly reach for and manipulate objects in the environment, a skill that strongly depends on vision of the objects in question and, to a lesser degree, vision of the hand. The eye–hand coordination required in such tasks has been investigated from several disciplinary perspectives including neurophysiology, biomechanics and experimental psychology, reflecting the multifaceted nature of its underpinnings. The central question in research on eye–hand coordination is how visual information about the target is transformed into motor commands for the arm and hand (e.g., Baraduc et al. 1999; Bullock and Grossberg 1988; Buneo and Andersen 2006; Crawford et al. 2004; Flanders et al. 1992; Sober and Sabes 2003, 2005). A key issue in this context is how perceptual and motor variables are coded in the brain at different stages of the visuomotor transformation.
Studies on eye–hand coordination have provided evidence for a prominent role of gaze direction at different stages of the visuomotor transformation. For example, the motor performance has been found to depend on gaze direction in a variety of tasks (e.g., Mrotek and Soechting 2007; Neggers and Bekkering 2000; Roerdink et al. 2008; Soechting et al. 2001). Furthermore, a large body of research suggests that the visual memory of stationary spatial targets is represented and updated in a gaze-centered reference frame (Henriques et al. 1998; Khan et al. 2007; McIntyre et al. 1997, 1998; van den Dobbelsteen et al. 2004; van Pelt and Medendorp 2007; Vetter et al. 1999). Such gaze effects have also been reported for auditory spatial targets (Pouget et al. 2002) and perception of heading direction using (neck) proprioception (Ivanenko et al. 2000). These behavioral observations are complemented by neurophysiological reports of neural activity in brain areas involved in visuomotor transformations (Andersen et al. 1985; Batista et al. 1999; Buneo et al. 2002; Lacquaniti et al. 1995; Medendorp et al. 2003, 2005; Pesaran et al. 2006).
Many real-life tasks involve non-stationary objects (e.g., avoiding an approaching car or catching a baseball). Such tasks are of interest for motor control theory because they are more stringently constrained by the environment than simple point-to-point reaching movements (Beek et al. 2003). We previously studied a right-handed catching task involving three-dimensional curvilinear ball trajectories, and suggested, among other things, that gaze direction played a prominent role in these movements. In our task, the hand could be moved along a lateral bar, which the balls passed at a position that could not be predicted by the participants before the start of the trial. The hand movements showed a distinct leftward bias (Fig. 1); for instance, when the hand started at the ball’s future passing position (unbeknownst to the participant), it was first moved leftward (i.e., away from the starting position), after which its movement direction was reversed to catch the ball at the starting position (see also Jacobs and Michaels 2006). These results suggested that the movement vector for interception might be coded in gaze-centered coordinates (as explained below), while it apparently is not based on an accurate prediction of the ball trajectory but is updated during execution (see also Montagne et al. 1999; Peper et al. 1994).
Capitalizing on the vector integration to endpoint model (Bullock and Grossberg 1988), we formulated a dynamical neural network model to account for our experimental observations. This model includes a continuous visuomotor transformation in that movement commands are generated online, for which it uses separate ball position and velocity signals (Dessing et al. 2002, 2005). Interestingly, the motor commands generated by the model did not rely on explicit predictions of the ball’s passing position (i.e., the interception point, IP), in contrast with typical implementations of robot interception (e.g., Riley and Atkeson 2002). The model accounts for the leftward movement biases based on two effects, namely through biases arising within the representation of target motion and through modulations arising within the transformation of movement commands into body-centered coordinates (Fig. 1; see next paragraphs). The present study was designed to elucidate the gaze direction-dependency of the latter.
Figure 2a depicts a schematic of the origin of movement biases as implemented in the model. First, no direct influence of the ball’s visual acceleration on the movement commands is implemented other than through movement updates; consistent with perceptual studies (e.g., Craig et al. 2005; Trewhella et al. 2003; Werkhoven et al. 1992) and other studies on interception (Brouwer et al. 2002; Port et al. 1997; but see Zago et al. 2009), the model only uses visual target position and velocity signals (see also Smeets and Brenner 1995). Not taking the visual acceleration into account means that at any instant the future visual ball displacement is underestimated in case of acceleration (and overestimated in case of deceleration), which biases the effective target position and thereby the hand movements. This effect arises within the representation of target motion, which may be considered the early stage of the visuomotor transformation. Second, the movement vector in the model is coded in a coordinate frame that rotates with the gaze direction; a leftward bias arises when this vector is transformed into a body-centered (Cartesian) coordinate frame, because only the lateral gaze-centered component is taken into account. Because this means that movements are essentially planned towards the gaze direction, only the visual acceleration in this direction influences movement biases. Not taking this acceleration into account in body-centered coordinates yields an underestimation of the future rightward ball displacement, which explains the leftward direction of the movement biases.
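The selective transformation described above can be illustrated with a toy two-dimensional sketch (our illustration only; the model itself is a dynamical neural network, Dessing et al. 2002, 2005, and the function below is not part of it): a movement vector is expressed in gaze-centered coordinates, only the component perpendicular to the line of gaze is retained, and the result is rotated back to body-centered coordinates.

```python
import numpy as np

def body_command_from_gaze(vec_body, gaze_deg):
    """Toy sketch of the transformation described in the text.

    vec_body: 2-D movement vector in body-centered coordinates
              (x = lateral/rightward, y = forward).
    gaze_deg: azimuthal gaze direction (0 = straight ahead,
              positive = rightward), in degrees.
    """
    g = np.deg2rad(gaze_deg)
    # Rotation from the body-centered to the gaze-centered frame:
    rot = np.array([[np.cos(g), np.sin(g)],
                    [-np.sin(g), np.cos(g)]])
    vec_gaze = rot @ vec_body
    vec_gaze[1] = 0.0          # discard the along-gaze component
    return rot.T @ vec_gaze    # back to body-centered coordinates
```

With gaze straight ahead, the forward component of the vector is discarded; with gaze rotated toward the target, the discarded direction rotates with it, which is the geometric source of the gaze direction-dependent biases predicted by the model.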
We recently performed an experiment that supported the suggestion that movement biases arise in the representation of target motion (Dessing et al. 2009). This experiment involved occlusion of the ball in different phases of its approach, with the occlusion conditions presented either in random order or in blocks. A smaller leftward movement bias was observed when the initial part of the ball trajectory was occluded, but only during the randomized presentation. Moreover, during the blocked presentation the leftward movement bias was smaller without occlusion than with late occlusion. While these effects do not prove that target acceleration is not taken into account altogether (cf. Bennett et al. 2007), they are consistent with the hypothesis that the future rightward ball displacement is underestimated, as would be expected if the gaze-centered lateral visual acceleration is not fully taken into account. The dependence on presentation order was interpreted to reflect strategies used to correct for this deficiency. The present study examined the validity of the modeled modulation of movement biases within the visuomotor transformation. Because this effect depends on gaze direction, we examined the effect of fixation direction on catching movements.
Figure 2b–d schematically illustrates how, according to our model (Dessing et al. 2005), movement biases are modulated by fixation direction (b: gaze on the ball; c: fixate forward; d: fixate rightward). The dark grey arrows from the ball indicate the gaze-centered components of the ball’s acceleration, which is not directly taken into account within the representation of target motion. Because the modeled visuomotor transformation is based on only the lateral gaze-centered component, movement commands are biased in a direction opposite to this component of the acceleration (grey arrow from the hand, perpendicular to the line of gaze). The actual movement bias in body-centered coordinates is the projection of this gaze-centered bias onto the hand movement-axis (black arrow from the hand). Figure 2c shows that fixating forward results in symmetric movement biases. Similarly, Fig. 2d illustrates the increased leftward movement bias caused by rightward fixation.
In testing these qualitative model predictions we recognized that fixation may influence the representation of target motion in ways currently not implemented in the model. During fixation, the ball is located in the retinal periphery, which may introduce small movement biases through exaggerations of retinal eccentricity (Bock 1986; Enright 1995; Henriques et al. 1998; Henriques and Crawford 2000). In addition, perceived target velocity is reduced in the periphery (e.g., Brooks and Mather 2000; Johnston and Wright 1986; Tynan and Sekuler 1982). To assess the impact of these effects on the predictions provided in Fig. 2, we simulated our catching model with all combinations of these effects (Appendix). Even though both effects may substantially influence the predicted hand movements, these simulations confirmed the validity of our prediction that hand movements should be biased leftward when fixating to the right compared to when fixating forward, due to the hypothesized gaze direction-dependent visuomotor transformation.
While the present experiment was motivated by a particular catching model, its scope is broadened by the fact that the effects of fixation on tasks requiring dynamic visuomotor transformations have not been investigated before. For instance, before conducting this study, we did not know whether participants could actually perform the task. Recordings of gaze direction revealed that only two-thirds of our participants could maintain fixation adequately enough to be included in the study. In the included participants, catching performance was slightly reduced compared to that typically observed for unconstrained viewing. Importantly, the spatial features of the hand movements were unaffected by fixation direction. This is inconsistent with the gaze direction-dependent visuomotor transformation implemented in our catching model. While the results were somewhat clouded by considerable inter-individual variability, this suggests that the movement biases observed here and before predominantly arise within the representation of target motion.
Methods
Participants
Twelve right-handed participants (six male, six female, mean age 24.5 years, range 19-52; mean handedness quotient 96.6, Oldfield 1971) volunteered to participate in the experiment. They reported normal or corrected-to-normal vision (stereo-acuity < 40 arcsec; Stereo Fly Test, Titmus Optical Co., Inc., Chicago, Illinois) and gave their informed consent before participation. Directly prior to the main experiment, participants also participated in a catching experiment involving visual occlusion (Dessing et al. 2009).
Experimental set-up
The experimental set-up was largely the same as that used by Dessing et al. (2005, 2009). Participants sat in a chair, while catching approaching balls passing them on the right (see Figs. 3a, 4 for an illustration of the ball coordinates, with x being positive in rightward direction, y in forward direction, and z in upward direction). They could move their right hand in the lateral direction only, along a horizontal bar positioned at shoulder level (i.e., a Velcro band strapped around the bar was connected to another band around the wrist). Balls (diameter 8 cm; mass 0.145 kg) approached the participants along one of eight trajectories that were defined by two initial ball positions (IBPs, 25 cm apart, referred to as near and far; the near IBP was located 32.5 cm to the right of the center of the chair) and four interception points (IPs, 15 cm apart, referred to as IP1, IP2, IP3, and IP4, respectively; IP1 was located 22.5 cm to the right of the center of the chair; Fig. 3a). The forward and vertical coordinates of the IPs relative to the bar were determined based on pilot measurements of participants holding a ball stationary at the IPs; on average, the balls were held 7.5 cm in front of and 7.0 cm above the horizontal bar (which were thus defined as the forward and vertical coordinates of the IPs). The center of the bar was 1.07 m above the ground and, on average, 17.5 cm in front of the eyes. The hand started at one of three initial hand positions (IHPs, 15 cm apart, located in between the IPs, referred to as near, middle, and far; Fig. 3a).
Balls were suspended from the ceiling (at a height of 3.30 m) using plastic coated steel wires (length: 2.50 m, diameter 0.2 mm) with a little magnet at the lower end. A metal plate was attached to the ball with a 5 mm long Kevlar wire. Prior to ball release, the coated steel wire was attached to a screw embedded in the ball (Fig. 3b), and the ball was suspended at one of the IBPs (2.04 m high, 3.5 m in front of the IPs) by connecting the metal plate to an electromagnet. When the ball was caught the magnet usually detached from the ball due to the impact. The glow-in-the-dark balls were charged using three UV-emitting fluorescent tubes, two suspended at about 15 cm below the IBPs and one behind the participant. A fixation point in the form of a 1 cm2 red light emitting diode (LED) was presented on a wall directly behind the IBPs (height: 1.52 m). Depending on the fixation condition, it was placed either straight ahead of the middle of the chair or 0.94 m to its right (i.e., at an average lateral visual angle of 14°). Besides this LED, the tubes, balls, and control computer were the only light sources present during the experiment; light from the control computer was blocked from the participant by means of a screen. Participants could just see their hand when specifically asked to look at it, but they reported not seeing it during the actual trials.
An Optotrak 3020 camera system (Northern Digital, Inc., Waterloo, Canada), positioned 2.6 m to the right of the subject at a height of approximately 2 m, registered the position (at 250 Hz) of a marker placed on a piece of Polystyrene taped to the back of the hand. The Optotrak recordings were triggered at the moment of ball release. The Optotrak system was also used after the experiment to measure the flight times (1,182 ms, accuracy 2 ms), with a camera placed perpendicular to the lateral axis through the IPs, using the occlusion time of a marker placed just behind the ball’s passing position.
In a separate session (run about 1.5 years after the initial experiment) we examined whether our participants could accurately maintain fixation during our task. Binocular gaze direction (pupil-corneal, 250 Hz) was recorded using an EyeLink II system (SR Research Ltd., Osgoode, Ontario, Canada) attached to a head band. At the moment of ball release a synchronization pulse was sent to and recorded by the EyeLink computer. Head orientation was measured using Optotrak (at 250 Hz), through three markers on the head band. We also recorded the position (using an Optotrak pointer) of six points close to the eyes together with the head markers, for reconstruction of the 3D eye-in-head positions during the trial using only the head markers. This session involved a random selection of 12 conditions from the initial experiment for each fixation condition (i.e., 24 trials in total, which differed over the participants).
Procedure
Participants were instructed to catch the approaching ball with their right hand and were free to move their head and body while doing so. They were informed when the next trial was about to start by the instruction to fixate on the LED. Balls were released 500-1,500 ms (randomized) after the experimenter pressed a key. When the trial ended, participants were instructed to move their hand to a comfortable position. A screen was placed in front of the subject to prevent him or her from predicting the conditions of the upcoming trial (i.e., from vision of the experimenter suspending the next ball). The ball was attached to the wire and suspended at the IBP corresponding to the new trial. Another ball was suspended at the other IBP to prevent the use of foreknowledge of the ball trajectory. Subsequently, the experimenter manually guided the hand of the subject to the new IHP. All conditions (two fixation directions × two IBPs × four IPs × three IHPs) were presented twice, resulting in a total of 96 trials. The two fixation conditions were presented in blocks, the order of which was counterbalanced across participants. Within these blocks, trials were presented in random order. Before each block, eight practice trials were run, in order to familiarize the participants with the task, and particularly with the fixation condition. During the experiment, the experimenter visually checked whether the participants kept fixating during the trial, and participants were also asked to report immediately if they accidentally broke fixation at any time during the trial. The few trials in which they indicated that fixation was broken were repeated at the end of the fixation block. After the experiment, participants were asked to judge the number of IPs and IHPs used in the experiment. Running the experiment took about 90 min.
The examination of gaze direction started with the built-in calibration procedure of the EyeLink system (nine point grid, calibration accuracy ~ 0.6°). For two participants the calibration was repeated during the session, because of possible camera movement relative to the head. After each calibration, we recorded the position of six reference points on the head, as well as a reference head position (when fixating forward). The experiment started after about five practice trials; an additional practice trial was run when the fixation direction changed. All other procedures matched those of the original experiment. During the session, participants were frequently asked to assess their own fixation performance. While some participants indicated that the task was difficult, all believed they maintained fixation. After the experiment, we once more recorded the reference head orientation. This session lasted about 45 min in total (60 min when an additional calibration was required). All procedures were approved by the Ethics Committee of the Faculty of Human Movement Sciences of VU University Amsterdam.
Data reduction
Both successful (i.e., trials in which the ball was caught; n = 592) and unsuccessful trials (n = 146) were included in the analyses. Missing values in the hand marker position were reconstructed using a cubic spline interpolation (i.e., maximally 25 consecutive samples). The position signals were low-pass filtered using a fourth-order recursive Butterworth filter (cut-off frequency: 10 Hz). In total 30 trials had to be excluded, because the hand was not positioned correctly by the experimenter, or because the wrong wire was attached to the ball (for all conditions at least one good trial was retained). The lateral position of the hand marker was used to calculate three dependent variables, as outlined in the following.
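For concreteness, the interpolation and filtering steps can be sketched as follows. This is a Python/SciPy illustration of the procedure described above, not the original analysis code; the zero-phase (bidirectional) application of the filter is our assumption, and gaps longer than 25 samples are assumed to have been excluded upstream.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import butter, filtfilt

def preprocess(pos, fs=250.0, cutoff=10.0):
    """Reconstruct short gaps in a marker-position signal and low-pass filter it.

    pos: 1-D array of positions sampled at fs (Hz), with NaNs marking
         missing samples (runs of at most 25 samples assumed).
    """
    pos = np.array(pos, dtype=float)  # work on a copy
    t = np.arange(len(pos))
    missing = np.isnan(pos)
    if missing.any():
        # Cubic spline through the valid samples fills the gaps:
        spline = CubicSpline(t[~missing], pos[~missing])
        pos[missing] = spline(t[missing])
    # Fourth-order Butterworth, 10 Hz cut-off, applied forward and
    # backward (filtfilt) for zero phase lag:
    b, a = butter(4, cutoff / (fs / 2.0))
    return filtfilt(b, a, pos)
```

The lateral component of the resulting signal would then feed the dependent variables defined below.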
While our model predictions only concerned spatial biases in the hand movements, we deemed it informative to also report results on the timing of movement initiation, if only for comparison with previous findings. Movement initiation was defined as the moment at which the absolute lateral hand velocity exceeded 2% of the first velocity peak that was larger than 0.05 m s−1. The analyses of movement biases throughout execution focused on three variables, describing early, final, and average biases in the hand movement, respectively. The early movement bias (ΔX h-early) was defined as the hand position at 200 ms after initiation (or at contact, if movements were initiated less than 200 ms before interception), expressed relative to the position of a minimal-jerk trajectory (Flash and Hogan 1985) from the hand position at initiation to the IP at that moment. Comparison to this reference trajectory was applied in order to minimize the effects of variations in movement time and movement distance on ΔX h-early. The final movement bias was defined as the constant error of the hand position at interception (CE HPI), which is the lateral hand position at interception relative to the IP (i.e., CE HPI > 0 indicates a final hand position to the right of the IP). The average bias over the entire movement from initiation to interception (average movement bias, ΔX h-av) was defined as the average lateral hand position in this period relative to a position exactly in between the hand position at initiation and the IP.
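The initiation criterion and the minimum-jerk reference trajectory translate into code straightforwardly. The sketch below is our illustration (Python), using the standard fifth-order minimum-jerk polynomial of Flash and Hogan (1985); the peak-detection logic is one plausible reading of the criterion, not the original implementation.

```python
import numpy as np

def minimum_jerk(x0, xf, mt, t):
    """Minimum-jerk position at time t for a movement from x0 to xf
    completed in movement time mt (Flash and Hogan 1985)."""
    tau = np.clip(np.asarray(t, dtype=float) / mt, 0.0, 1.0)
    return x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

def initiation_index(vel, peak_floor=0.05, frac=0.02):
    """First sample at which |lateral velocity| exceeds frac (2%) of the
    first velocity peak larger than peak_floor (0.05 m/s)."""
    speed = np.abs(np.asarray(vel, dtype=float))
    interior = np.arange(1, len(speed) - 1)
    # Local maxima above the floor:
    is_peak = ((speed[interior] > speed[interior - 1]) &
               (speed[interior] >= speed[interior + 1]) &
               (speed[interior] > peak_floor))
    peak = speed[interior[is_peak][0]]
    return int(np.argmax(speed > frac * peak))
```

ΔX h-early would then amount to the measured hand position 200 ms (50 samples at 250 Hz) after the initiation sample, minus minimum_jerk evaluated at 0.2 s for a movement from the hand position at initiation to the IP in the remaining time.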
In total three trials recorded during the fixation assessment (all for different participants) were not used in the analyses (due to technical or procedural errors). To reconstruct gaze direction, we used the average of the unfiltered left and right eye-in-head angles output by the EyeLink system, in combination with the unfiltered head orientation and eye positions as measured using Optotrak. The latter was used to correct for small deviations in the required fixation direction due to lateral head translation (for instance occurring for relatively short participants, when the hand occupied a more eccentric position). The average of the reference head positions recorded before and after the session was taken as the zero orientation of the head.
Statistical analyses
All dependent variables were examined using repeated measures analyses of variance (ANOVAs; P < 0.05) with within-subject factors Fixation (forward and rightward), IBP (near and far), IP (IP1, IP2, IP3, and IP4) and IHP (near, middle, and far). When the sphericity assumption was violated Huynh-Feldt corrected degrees of freedom were applied. Paired-samples t tests were used for post hoc analyses, with Sidak step-down-adjusted P values for each test of main effects (i.e., for an A × B interaction, separate adjustments were made for tests of the effects of A and tests of the effects of B). Data are presented as “mean ± standard deviation”, and listed in the order of the levels as mentioned in this paragraph, unless stated otherwise.
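The Sidak step-down adjustment used here corresponds, in our understanding, to the Holm–Sidak procedure; a minimal sketch (Python, our illustration):

```python
import numpy as np

def sidak_stepdown(pvals):
    """Sidak step-down (Holm-Sidak) adjusted p-values.

    The smallest p-value is tested against the full family of m
    comparisons, the next against m - 1, and so on; a running maximum
    keeps the adjusted values monotone in the original ordering.
    """
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    adj = np.empty(m)
    running = 0.0
    for rank, idx in enumerate(order):
        candidate = 1.0 - (1.0 - p[idx]) ** (m - rank)
        running = max(running, candidate)  # enforce monotonicity
        adj[idx] = running
    return adj
```

An adjusted value below .05 then indicates significance at the family-wise level used throughout.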
Results
Before discussing the qualitative and quantitative features of our observations, it is useful to consider some more general features of the data. As in our previous experiments, participants generally had no idea about the number of IHPs and IPs used, which they overestimated by at least two. This implies that they could not rely on expectations regarding the ball trajectory and thus had to use online visual information. Importantly, before discussing any results concerning the analyses of hand movements we first report the findings of the assessment of fixation maintenance. Gaze direction was aligned to its average value over the first 200 ms, so as to determine the within-trial variations in gaze direction (Fig. 5). While our participants generally were quite confident that they accurately gazed at the fixation point, some showed an occasional saccade to the ball and back to the fixation point. Participant 4 actually appeared to follow the ball in all trials. We calculated the range of gaze angles adopted during each trial from ball release until 150 ms before contact. The latter moment in time was chosen because the EyeLink system inevitably restricts the field of view due to the small cameras placed just below the eyes. Given the head orientations used in the experiment, the ball in some cases may have become occluded by the camera(s) later during ball approach. Any change in gaze direction after this moment might thus be an artifact of wearing the EyeLink system. Moreover, we mainly aimed to ensure that the quality of fixation did not differ between the fixation conditions, such that our analyses of hand movements would not be confounded by variations in the quality of fixation. Changes in gaze direction occurring just before interception are irrelevant in this respect, because visual information picked up 150 ms before interception (i.e., the visuomotor delay) cannot be used to modulate hand movements before interception.
We decided to exclude those participants for whom the gaze angle range exceeded 3° in more than one of the 24 trials, which led to the exclusion of Participants 1, 2, 4, and 9 from the analyses of the main experiment.
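In code, the per-trial criterion amounts to the following (an illustrative Python sketch with our own function names; sampling rate 250 Hz as in the recordings):

```python
import numpy as np

FS = 250  # Hz, sampling rate of the gaze recordings

def gaze_angle_range(gaze_deg, contact_idx, baseline_ms=200, cutoff_ms=150):
    """Within-trial range of gaze azimuth, after aligning the signal to its
    average over the first baseline_ms after ball release and ignoring the
    final cutoff_ms before ball-hand contact."""
    gaze = np.asarray(gaze_deg, dtype=float)
    n_base = int(baseline_ms * FS / 1000)           # 50 samples
    aligned = gaze - np.mean(gaze[:n_base])
    end = contact_idx - int(cutoff_ms * FS / 1000)  # drop the last 150 ms
    return float(np.ptp(aligned[:end]))

def exclude_participant(trial_ranges, limit_deg=3.0, max_violations=1):
    """Exclude when the gaze range exceeds limit_deg in more than one trial."""
    return int(np.sum(np.asarray(trial_ranges) > limit_deg)) > max_violations
```

A single large-range trial is thus tolerated; a second one leads to exclusion.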
Because the head was not restrained, gaze fixation could involve combined opposite rotations of the head-in-world and the eye-in-head (vestibulo-ocular reflex, VOR). We determined the azimuthal head-in-world Fick angle as a function of the azimuthal eye-in-head Fick angle, because a consistent negative relation indicates the presence of a VOR. Most participants indeed showed a VOR in the second half of each trial for both fixation directions, although its strength differed over trials and participants. The eye-in-head movements thus compensated for changes in head orientation. We will return to this finding below when interpreting the observed hand movements.
It is interesting to note that while the included participants could accurately maintain fixation (i.e., they managed not to look at the balls), their performance levels remained reasonably high. Whereas catching success in this task (under unrestricted vision) is normally above 95% (Dessing et al. 2005, 2009), the participants in the present experiment caught 79.7% of the balls. This catching success appeared to be similar for the fixation conditions (81.3 and 78.2%), IBPs (76.1 and 83.2%), and IHPs (79.6, 81.2, and 78.2%), as well as IPs (81.8, 80.2, 77.8, and 78.8%). Although part of this decreased performance most likely was due to inadequate timing of the grasp, the present study focused exclusively on the effects of fixation on hand positioning.
Figure 2b–d presents a vector-based illustration of the model predictions of the effects of fixation direction on the movement biases during catching. Figure 6 shows these predictions more explicitly by giving the kinematics predicted from simulations of a version of our catching model (see Appendix). For comparison we simulated the situation in which the ball is visually pursued (Fig. 6a; a condition not included in the experiment). The effect of fixation on the predicted kinematics is provided in Fig. 6b (forward fixation) and Fig. 6c (rightward fixation) and these panels clearly illustrate that fixation direction should influence the hand movement bias. With forward fixation the hand movements deviate rightward compared to the situation of visual pursuit and with rightward fixation they deviate leftward. This is most evident in the figure for balls starting at the far IBP. This prediction was examined using analyses of movement biases occurring early and late during the catching movement, as well as the average bias throughout the movement. Figure 7 shows the individual hand movements. Although this figure shows that there was a substantial degree of inter- as well as intra-individual variation in the hand movements (see “Individual differences”), it also suggests that fixation did not induce consistent effects across conditions and participants. This was corroborated by the statistical analyses reported next.
Movement initiation
In simulating our model, we had to assume that initiation occurred at some point in time, because the model does not include control of movement initiation. Thus, our predictions concerning the effects of fixation depended on the assumption that fixation did not substantially influence the timing of movement initiation. This assumption was supported by the absence of any effect of Fixation on T ini. As can be seen in Table 1, movement initiation did occur earlier for balls starting at the near IBP than for balls starting at the far IBP. Post hoc analyses of the effect of IHP showed that initiation occurred earlier when the hand started far to the right, compared to the middle IHP. Figure 8b shows averages of T ini for the IP × IHP interaction [F(6, 42) = 6.63, P < 0.001, ηp² = 0.49], post hoc analyses of which revealed that movements were initiated later from the near IHP, compared to the other IHPs, but only when the ball approached the near IP.
Early movement bias
Figure 6 clearly shows that the effect of Fixation should already be apparent in the earliest phase of the movement; for this reason we analyzed the early movement bias, ΔX h-early. Contrary to the predictions, there was no main or interaction effect of Fixation on the early movement bias (as can be appreciated from the time-averaged hand movements presented in Fig. 8a). However, the early hand movements were affected by other factors. ΔX h-early was more leftward for balls approaching from the near than from the far IBP (Table 1). ΔX h-early was also more leftward the further to the right the ball passed (Table 1; all differences significant). These two factors also interacted [F(3, 27) = 5.04, P < 0.01, ηp² = 0.42; Fig. 8b]. Post hoc analyses showed that the early movement bias only differed between balls moving to IP2 and IP3 (significant for both IBPs). In addition, for the far IBP the difference in ΔX h-early between the nearest two IPs did not reach significance.
Final movement bias
While our predictions pertained mainly to the biases arising during the movement, fixation may be expected to influence the final movement bias (at interception) in the same direction as these movement biases (see Fig. 6). As mentioned before, Fig. 8a, which depicts the time-averaged hand movements, clearly shows that, on average, fixation did not affect movement biases. Contrary to our expectations, no significant effects of Fixation on CE HPI were obtained.
The final movement bias was affected by some other factors. The effect of IP showed that the hand was positioned significantly more to the left for IP4 compared to the other IPs (Table 1). Due to the relation between CE HPI and catching performance, we further examined the effect of IP by breaking down the averages for the misses and hits. Figure 9 shows that whereas for IP2 the misses (black) mostly comprised the right part of the distribution, for IP4 the distribution of the misses was much more leftward. This was confirmed statistically, by comparing CE HPIs for misses and hits (IP2: t(59.1) = 2.94, P < 0.01; IP4: t(103.7) = −7.45, P < 0.001). Finally, post hoc analyses of the effect of IHP (Table 1) did not show any significant differences, but the effect identified by the ANOVA appeared to result from a more rightward hand position at interception when the hand started at the far IHP.
Our simulations suggested that, through an underestimation of peripheral ball velocity, fixation would cause the final movement bias to deviate leftward, particularly at further IPs (see Appendix). Comparison of the average CE_HPI observed here (Table 1) with those in a previous experiment with no restrictions on gaze direction (using the same participants; Dessing et al. 2009, see their Table 1) showed that the CE_HPIs obtained in the present experiment were not more leftward than those in that more natural situation. This suggests that underestimation of peripheral ball velocity did not influence the hand position at interception in our experiment.
Average movement bias
Our predictions with respect to the average movement bias matched those for the early movement bias, as discussed before. The data again underscored the limited predictive value of our model regarding the effects of gaze direction. The statistical analyses for ΔX_h-av only revealed effects of IBP and IP; post hoc analyses showed that the average movement bias was more leftward for balls approaching from the near than from the far IBP and more leftward the further to the right the ball passed the participant (Table 1; all differences significant).
Individual differences
The analyses presented above all focused on group effects, thus averaging out individual variations. Compared to our earlier catching experiments, however, there was a substantial degree of inter- as well as intra-individual variation in the hand movements (see Fig. 7). More detailed inspection of the individual movement traces suggested some specific differences in the effects of Fixation for the two initial ball positions, even though this Fixation × IBP interaction was not significant for any of the variables. We plotted the individual values of all dependent variables for this interaction in Fig. 10. Given the small number of repetitions, we could not statistically analyze the individual differences and therefore only discuss these qualitatively.
Figure 10 illustrates that the pattern of results varied considerably over participants. Some showed effects of IBP that were consistent across fixation directions and vice versa (whether present or absent; e.g., P1, P3, and P11 for ΔX_h-early, P2 for CE_HPI, P3, P7, and P10 for T_ini, and P9 for ΔX_h-av). Others appeared to show differential effects of Fixation for the two IBPs (e.g., P1, P7, and P10 for CE_HPI and ΔX_h-av). Two participants (P3 and P9) showed a larger leftward average movement bias for rightward fixation irrespective of the ball’s starting position. Our model, however, predicted the effect to be present already early during movement execution. Thus, Fixation influenced the hand movements of some participants, but the direction and magnitude of this effect (also as a function of the IBP) varied considerably and did not comply with our model predictions.
Discussion
Psychophysical evidence suggests that visuomotor transformations operate on gaze-centered representations (Henriques et al. 1998; Khan et al. 2007; McIntyre et al. 1997, 1998; Vetter et al. 1999), a suggestion that is backed up by neurophysiological findings (e.g., Batista et al. 1999; Buneo and Andersen 2006; Buneo et al. 2002; Desmurget et al. 1999; Medendorp et al. 2003, 2005). Using a neural network model we previously showed that details of catching movements, particularly the consistent leftward biases (Fig. 1), can also be understood in terms of the underlying gaze-centered representations (Dessing et al. 2005). In particular, we argued that biases arising within the representation of target motion are modulated during the transformation from gaze-centered to body-centered coordinates. A recent experiment supported that movement biases arise within the early representations of target motion (Dessing et al. 2009). The present study was designed to evaluate the second aspect of our model, namely a specific modulation of movement biases within the visuomotor transformation.
We asked participants to fixate, rather than visually pursue the ball, because the model predicted that fixation direction would systematically modulate the movement biases within the visuomotor transformation (Figs. 2 and 6). First of all, the participants performed reasonably adequately: they caught about 80% of all balls, only 15% less than when they were allowed to gaze at the ball (Dessing et al. 2009). For two of the passing positions used, the misses appeared to involve a misplacement of the hand (a rightward error for IP2 and a leftward error for IP4); the other balls thus appeared to be missed due to inadequate timing of the grasp. The main finding, however, was that fixation direction did not influence the observed hand movements in any way. Even though this result was accompanied by considerable inter-individual variability (see also below), it refutes our implementation of the visuomotor transformation (cf. Dessing et al. 2005). In hindsight, this may not be entirely surprising; our initial proposal was rather extreme in that the hand-target vector only used the component perpendicular to the gaze direction, thus neglecting an entire dimension. In contrast, the present results indicate that all dimensions of the movement vector must be considered.
Although our results invalidate our specific implementation of the visuomotor transformation, they do not refute the gaze-centered organization of the positional input to this transformation (see references above), nor do they rule out that biases may have occurred in this transformation. Indeed, many authors have claimed that biases arise within the visuomotor transformation, for instance through parcellation and linear approximations (Flanders and Soechting 1990; Flanders et al. 1992). Other behavioral findings and theoretical contributions (Blohm and Crawford 2007; Henriques and Crawford 2002; Henriques et al. 2003) suggest that these biases are only small, and that nonlinear aspects of the visuomotor transformation (related to the eye–head–shoulder–arm geometry) are represented rather accurately. We regularly observed substantial biases (e.g., movements in the wrong direction, more than 10 cm in amplitude; see also Dessing et al. 2005, 2009). The errors typically reported to arise within the visuomotor transformation are considerably smaller (e.g., Flanders et al. 1992). The fact that these biases are modulated by early and late visual occlusion (Dessing et al. 2009) and depend on the specific shape of the trajectory (see Montagne et al. 1999) strongly suggests that they predominantly arise before the movement vector is calculated, within the representation of target motion.
As argued before, biases may arise within representations of target motion through the low sensitivity to visual (non-gravitational; Zago et al. 2009) acceleration (Craig et al. 2005; Dessing et al. 2005, 2009; Trewhella et al. 2003; Werkhoven et al. 1992). With enough viewing time visual target acceleration can be partly taken into account (≥500 ms; Bennett et al. 2007), probably through history-dependent mechanisms using changes in the velocity signal (Price et al. 2005; Schlack et al. 2007). Due to the high curvilinearity of our target trajectories (Fig. 4d) it is unlikely that such a mechanism performs accurately at initiation (about 500 ms after ball release). For our ball trajectories, velocity signals themselves may also be affected due to the small initial displacement in azimuthal and radial direction. These effects on the coded target motion (through the underlying velocity and acceleration signals) will pass through the visuomotor transformation to influence the hand movements. In the following, we discuss how this may yield a leftward movement bias in our catching task.
The fact that the observed hand movements were similar to those during pursuit (see Footnote 3) is consistent with a representation of target motion in world-centered coordinates (i.e., uninfluenced by body configuration). A world-centered representation of target motion has been reported to exist in the medial superior temporal area in posterior parietal cortex (Ilg et al. 2004; Inaba et al. 2007), electrical stimulation of which also induces biases in interceptive reaching (Ilg and Schumann 2007; see also Schenk et al. 2005). More specifically, this area appears to be sensitive to horizontal and vertical motion in the frontoparallel plane, which relates to the rates of change of the target’s azimuth and elevation angle, and radial motion (e.g., Maunsell and van Essen 1983; Tanaka and Saito 1989). In Dessing et al. (2009), we argued that the leftward bias may originate from the fact that azimuthal (i.e., angular) motion underestimates the actual lateral motion vector perpendicular to the eye-ball vector (i.e., it neglects distance-dependency). This explanation can be formulated more generally as an imbalance in the representation of radial and azimuthal target motion (for the present argument, it suffices to focus only on the horizontal plane). In this formulation, target motion refers to velocity as well as acceleration, as indicated above.
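The distance-dependency mentioned here can be made concrete with a toy computation (our own sketch; the function name and numbers are illustrative and not taken from the experiment):

```python
def azimuthal_rate(r_m, v_lat):
    """Azimuthal (angular) velocity, in rad/s, produced by a purely
    lateral target velocity v_lat (m/s) at eye-ball distance r_m (m),
    under small-angle geometry."""
    return v_lat / r_m

# A ball 2 m away moving laterally at 1 m/s produces an angular rate of
# only 0.5 rad/s; a motion representation that uses this angular rate
# without scaling it back up by distance underestimates the true
# lateral velocity whenever the target is more than 1 m away.
rate = azimuthal_rate(2.0, 1.0)
```

The numerical mismatch between `rate` (0.5) and the true lateral speed (1.0) is the kind of imbalance between azimuthal and radial motion signals discussed in the text.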
An underestimation of azimuthal motion relative to motion in radial direction for our ball trajectories indeed predicts a leftward bias, because the azimuthal component always points rightward relative to our set-up. We implemented an underestimation of azimuthal target motion in an adapted version of our catching model (i.e., with a full 3D coding of the movement vector, see Appendix). Simulations of this model confirmed that this model could indeed describe the leftward bias observed in our experiments (see Fig. 11). Because the resulting leftward bias is also towards the eyes/head, the imbalance could be related to an evolved protective tendency (Neppi-Mòdona et al. 2004). However, because the leftward bias is specific to the used curvilinear ball trajectories (see Arzamarski et al. 2007, Montagne et al. 1999), a perceptual origin seems more likely (see also Harris and Drga 2005; Peper et al. 1994).
When a target is presented in the periphery, its retinal eccentricity may be exaggerated (Bock 1986; Enright 1995; Henriques et al. 1998; Henriques and Crawford 2000) and its velocity may be underestimated (e.g., Brooks and Mather 2000; Johnston and Wright 1986; Tynan and Sekuler 1982). Our simulations of these effects (see Appendix) showed that they may substantially influence movement biases. The fact that the observed hand movements were very similar to those found when the ball is pursued (Dessing et al. 2005, 2009; see Footnote 3) therefore suggests that these effects did not play a substantial role in our experiment. With respect to coded target position, this might be related to the fact that the ball remained in view throughout the trial. This means that targets did not have to be represented in memory, whereas exaggeration of retinal eccentricity is typically probed with memorized target positions. The latter aspect may also limit direct comparison with the results of those studies, even when many aspects of the design are similar to ours (e.g., Beurze et al. 2006; Khan et al. 2007). It cannot be excluded that the retinal eccentricity of both ball and hand was exaggerated, inducing small biases in the movement vector. If so, however, this effect should have been small for hand position coding, because the hand was nearly invisible in our task (see also Appendix). The coded hand position thus probably depended more strongly on proprioceptive signals (van Beers et al. 1999; Graziano 1999; see also Beurze et al. 2006; Sober and Sabes 2003, 2005).
Since the head was not restrained, gaze fixation may have involved small head movements combined with eye movements in the opposite direction (dynamic VOR). Most participants indeed showed this behavior, particularly in the second half of the trial. In our model we ignored these effects, because they do not affect the gaze-centered ball coordinates. However, disentangling gaze-centered representations into their constituents is particularly relevant for situations in which the head is free to move (e.g., Klier et al. 2001), because eye and head direction signals play a role in the visuomotor transformation (Blohm and Crawford 2007; Buneo and Andersen 2006; Crawford et al. 2004). This holds for positional as well as velocity signals: the coding of world-centered target motion requires integration and transformation of different velocity signals, for which the 3D rotational geometry of head and eyes has to be taken into account (Blohm et al. 2008). Indeed, representations of moving targets appear to involve combinations of head- and eye-centered signals (Neppi-Mòdona et al. 2004). Accounting for catching behavior in more detail most likely requires a more exhaustive implementation of target motion coding, in which detailed effects of the ball-eye–head(–body) configuration are considered. Such modeling should be constrained by findings from catching studies in which a wider range of body-in-world, head-in-body, eye-in-head, and ball-on-retina configurations is considered.
The observed hand movements varied considerably over participants (see Figs. 7 and 10), which probably contributed to the fact that only a small number of effects and interactions were significant. None of the participants appeared to show the effects of Fixation predicted by the model, which corroborates our conclusions with respect to the group statistics. Biases arising within target motion representations may thus differ between persons, in terms of their magnitude as well as their detailed pattern. Such individual differences may arise at different levels of the visuomotor transformation. Individuals may for instance differ in their reliance on vision-based movement updates, which may relate to the underlying control strategy (Dessing et al. 2002). However, many aspects of our data, such as the movement reversals occurring when the hand started close to the IP, suggested that our participants updated their movements (see also Dessing et al. 2009). Individual differences may also arise from the employed coordinate frames, for instance in terms of the relative weighting given to eye-, body-, and/or world-centered frames of reference (Blohm and Crawford 2007; Gentilucci et al. 1997; Heuer and Sangals 1998; Khan et al. 2007; Lemay and Stelmach 2005; McIntyre et al. 1998; Neppi-Mòdona et al. 2004; Snyder et al. 1998; Volcic et al. 2007). Although the present study was not designed to pinpoint the origin of individual differences, the results indicate that it may be worthwhile to zoom in on such potential variations in the control of interceptive actions.
Conclusion
In the present study, we varied fixation direction during catching to evaluate the prediction (based on our catching model) that movement biases in catching are modulated within the visuomotor transformation. This aspect predicted a gaze-direction dependency of the catching movements, which, however, was not observed here. This clearly refutes the visuomotor transformation as implemented in our model. We conclude that movement biases in catching arise mainly in the early representation of target motion, possibly due to an imbalance in the represented radial and azimuthal target motion, generating a bias towards the eyes. Yet, our results also indicate that detailed aspects of the visuomotor transformation may differ among individuals.
Notes
1. Because this criterion is somewhat arbitrary, we checked the effect of including or excluding more participants (except Participants 4 and 9). This showed that our results are robust across the remaining participants.
2. A second prediction that can be appreciated from these simulations is that fixation itself (compared to pursuing the ball) increases the effect of the ball’s starting position on the early and average movement biases. This prediction was not examined in the present experiment, because the visual pursuit condition was not included.
3. We performed additional ANOVAs in which the factor Fixation had three levels (i.e., we included the data of the same participants from a condition in which the ball could be freely pursued; Dessing et al. 2009). These analyses showed that the hand movements in the present experiment were very similar to those with free viewing.
References
Andersen RA, Essick GK, Siegel RM (1985) Encoding of spatial location by posterior parietal neurons. Science 230:456–458
Arzamarski R, Harrison SJ, Hajnal A, Michaels CF (2007) Lateral ball interception: hand movements during linear ball trajectories. Exp Brain Res 177:312–323
Baraduc P, Guigon E, Burnod Y (1999) Recoding arm position to learn visuomotor transformations. Cereb Cortex 11:906–917
Batista AP, Buneo CA, Snyder LH, Andersen RA (1999) Reach plans in eye-centered coordinates. Science 285:257–260
Beek PJ, Dessing JC, Peper CE, Bullock D (2003) Modelling the control of interceptive actions. Philos Trans R Soc Lond B Biol Sci 358:1511–1523
Bennett SJ, Orban de Xivry JJ, Barnes GR, Lefèvre P (2007) Target acceleration can be extracted and represented within the predictive drive to ocular pursuit. J Neurophysiol 98:1405–1414
Beurze SM, van Pelt S, Medendorp WP (2006) Behavioral reference frames for planning reaching movements. J Neurophysiol 96:352–362
Blohm G, Crawford JD (2007) Computations for geometrically accurate visually guided reaching in 3-D space. J Vis 7–4:1–22
Blohm G, Daye P, Lefevre P (2008) The 3D geometry of visuomotor velocity transformations for pursuit eye movements. Presented at the annual meeting of the Society for Neuroscience, Washington, USA
Bock O (1986) Contribution of retinal versus extraretinal signals towards visual localization in goal-directed movements. Exp Brain Res 64:476–482
Brooks K, Mather G (2000) Perceived speed of motion in depth is reduced in the periphery. Vision Res 40:3507–3516
Brouwer AM, Brenner E, Smeets JBJ (2002) Hitting moving objects: is target speed used in guiding the hand? Exp Brain Res 143:198–211
Bullock D, Grossberg S (1988) Neural dynamics of planned arm movements: emergent invariants and speed-accuracy properties during trajectory formation. Psychol Rev 95:49–90
Buneo CA, Andersen RA (2006) The posterior parietal cortex: sensorimotor interface for the planning and online control of visually guided movements. Neuropsychologia 44:2594–2606
Buneo CA, Jarvis MR, Batista AP, Andersen RA (2002) Direct visuomotor transformations for reaching. Nature 416:632–636
Craig CM, Berton E, Rao G, Fernandez L, Bootsma RJ (2005) Judging where a ball will go: the case of curved free kicks in football. Naturwissenschaften 93:97–101
Crawford JD, Medendorp WP, Marotta JJ (2004) Spatial transformations for eye–hand coordination. J Neurophysiol 92:10–19
Desmurget M, Epstein CM, Turner RS, Prablanc C, Alexander GE, Grafton ST (1999) Role of the posterior parietal cortex in updating reaching movements to a visual target. Nat Neurosci 2:563–567
Dessing JC, Bullock D, Peper CE, Beek PJ (2002) Prospective control of manual interceptive actions: comparative simulations of extant and new model constructs. Neural Netw 15:163–179
Dessing JC, Peper CE, Bullock D, Beek PJ (2005) How position, velocity, and temporal information combine in the prospective control of catching: data and model. J Cogn Neurosci 17:668–686
Dessing JC, Oostwoud Wijdenes L, Peper CE, Beek PJ (2009) Adaptations of lateral hand movements to early and late visual occlusion in catching. Exp Brain Res 192:669–682
Enright JT (1995) The non-visual impact of eye orientation on eye–hand coordination. Vision Res 35:1611–1618
Flanders M, Soechting JF (1990) Parcellation of sensorimotor transformations for arm movements. J Neurosci 10:2420–2427
Flanders M, Helms Tillery SI, Soechting JF (1992) Early stages in a sensorimotor transformation. Behav Brain Sci 15:309–362
Flash T, Hogan N (1985) The coordination of arm movements: an experimentally confirmed mathematical model. J Neurosci 5:1688–1703
Gentilucci M, Daprati E, Gangitano M, Toni I (1997) Eye position tunes the contribution of allocentric and egocentric information to target localization in human goal-directed arm movements. Neurosci Lett 222:123–126
Graziano MS (1999) Where is my arm? The relative role of vision and proprioception in the neuronal representation of limb position. Proc Natl Acad Sci USA 96:10418–10421
Harris JM, Drga VT (2005) Using visual direction in three-dimensional motion perception. Nat Neurosci 8:229–233
Henriques DYP, Crawford JD (2000) Direction-dependent distortions of retinocentric space in the visuomotor transformation for pointing. Exp Brain Res 132:179–194
Henriques DYP, Crawford JD (2002) Role of eye, head and shoulder geometry in the planning of accurate arm movements. J Neurophysiol 87:1677–1685
Henriques DYP, Klier EM, Smith MA, Lowy D, Crawford JD (1998) Gaze-centered remapping of remembered visual space in an open-loop pointing task. J Neurosci 18:1583–1594
Henriques DYP, Medendorp WP, Gielen CCAM, Crawford JD (2003) Geometric computations underlying eye–hand coordination: orientations of the two eyes and the head. Exp Brain Res 152:70–78
Heuer H, Sangals J (1998) Task-dependent mixtures of coordinate systems in visuomotor transformations. Exp Brain Res 119:224–236
Ilg UJ, Schumann S (2007) Primate area MST-l is involved in the generation of goal-directed eye and hand movements. J Neurophysiol 97:761–771
Ilg UJ, Schumann S, Thier P (2004) Posterior parietal cortex neurons encode target motion in world-centered coordinates. Neuron 43:145–151
Inaba N, Shinomoto S, Yamane S, Takemura A, Kawano K (2007) MST neurons code for visual motion in space independent of pursuit eye movements. J Neurophysiol 97:3473–3483
Ivanenko YP, Grasso R, Lacquaniti F (2000) Neck muscle vibration makes walking humans accelerate in the direction of gaze. J Physiol 525:803–814
Jacobs DM, Michaels CF (2006) Lateral ball interception I: operative variables, attunement, and calibration. JEP:HPP 32:443–458
Johnston A, Wright MJ (1986) Matching velocity in central and peripheral vision. Vision Res 26:1099–1109
Khan AZ, Crawford JD, Blohm G, Urquizar C, Rossetti Y, Pisella L (2007) Influence of initial hand and target position on reach errors in optic ataxic and normal subjects. J Vis 7:8.1–16
Klier EM, Wang H, Crawford JD (2001) The superior colliculus encodes gaze commands in retinal coordinates. Nat Neurosci 4:627–632
Lacquaniti F, Guigon E, Bianchi L, Ferraina S, Caminiti R (1995) Representing spatial information for limb movement: role of area 5 in the monkey. Cereb Cortex 5:391–409
Lemay M, Stelmach GE (2005) Multiple frames of reference for pointing to a remembered target. Exp Brain Res 164:301–310
Maunsell JH, Van Essen DC (1983) Functional properties of neurons in middle temporal visual area of the macaque monkey. I. Selectivity for stimulus direction, speed, and orientation. J Neurophysiol 49:1127–1147
McIntyre J, Stratta F, Lacquaniti F (1997) Viewer-centered frame of reference for pointing to memorized targets in three-dimensional space. J Neurophysiol 78:1601–1618
McIntyre J, Stratta F, Lacquaniti F (1998) Short-term memory for reaching to visual targets: psychophysical evidence for body-centered reference frames. J Neurosci 18:8423–8435
Medendorp WP, Goltz HC, Vilis T, Crawford JD (2003) Gaze-centered updating of visual space in human parietal cortex. J Neurosci 23:6209–6214
Medendorp WP, Goltz HC, Crawford JD, Vilis T (2005) Integration of target and effector information in human posterior parietal cortex for the planning of action. J Neurophysiol 93:954–962
Montagne G, Laurent M, Durey A, Bootsma R (1999) Movement reversals in ball catching. Exp Brain Res 129:87–92
Mrotek LA, Soechting JF (2007) Target interception: hand-eye coordination and strategies. J Neurosci 27:7297–7309
Neggers SF, Bekkering H (2000) Ocular gaze is anchored to the target of an ongoing pointing movement. J Neurophysiol 83:639–651
Neppi-Mòdona M, Auclair D, Sirigu A, Duhamel JR (2004) Spatial coding of the predicted impact location of a looming object. Curr Biol 14:1174–1180
Oldfield RC (1971) The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia 9:97–113
Peper CE, Bootsma RJ, Mestre DR, Bakker FC (1994) Catching balls: How to get the hand to the right place at the right time. JEP:HPP 20:591–612
Pesaran B, Nelson MJ, Andersen RA (2006) Dorsal premotor neurons encode the relative position of the hand, eye, and goal during reach planning. Neuron 51:125–134
Port NL, Lee D, Dassonville P, Georgopoulos AP (1997) Manual interception of moving targets. I. Performance and movement initiation. Exp Brain Res 116:406–420
Pouget A, Ducom JC, Torri J, Bavelier D (2002) Multisensory spatial representations in eye-centered coordinates for reaching. Cognition 83:B1–B11
Price NS, Ono S, Mustari MJ, Ibbotson MR (2005) Comparing acceleration and speed tuning in macaque MT: physiology and modeling. J Neurophysiol 94:3451–3464
Riley M, Atkeson CG (2002) Robot catching: towards engaging human-humanoid interaction. Auton Robot 12:119–128
Roerdink M, Ophoff ED, Peper CE, Beek PJ (2008) Visual and musculoskeletal underpinnings of anchoring in rhythmic visuo-motor tracking. Exp Brain Res 184:143–156
Schenk T, Ellison A, Rice N, Milner AD (2005) The role of V5/MT+ in the control of catching movements: an rTMS study. Neuropsychologia 43:189–198
Schlack A, Krekelberg B, Albright TD (2007) Recent history of stimulus speeds affects the speed tuning of neurons in area MT. J Neurosci 27:11009–11018
Smeets JBJ, Brenner E (1995) Perception and action are based on the same visual information: distinction between position and velocity. JEP:HPP 21:19–31
Snyder LH, Grieve KL, Brotchie P, Andersen RA (1998) Separate body- and world-centered representations of visual space in parietal cortex. Nature 394:887–891
Sober SJ, Sabes PN (2003) Multisensory integration during motor planning. J Neurosci 23:6982–6992
Sober SJ, Sabes PN (2005) Flexible strategies for sensory integration during motor planning. Nat Neurosci 8:490–497
Soechting JF, Engel KC, Flanders M (2001) The Duncker illusion in eye–hand coordination. J Neurophysiol 85:843–854
Tanaka K, Saito H (1989) Analysis of motion of the visual field by direction, expansion/contraction, and rotation cells clustered in the dorsal part of the medial superior temporal area of the macaque monkey. J Neurophysiol 62:626–641
Trewhella J, Edwards M, Ibbotson MR (2003) Sensitivity to the acceleration of looming stimuli. Clin Exp Ophthalmol 31:258–261
Tynan PD, Sekuler R (1982) Motion processing in peripheral vision: reaction time and perceived velocity. Vision Res 22:61–68
van Beers RJ, Sittig AC, Gon JJ (1999) Integration of proprioceptive and visual position-information: An experimentally supported model. J Neurophysiol 81:1355–1364
van den Dobbelsteen JJ, Brenner E, Smeets JBJ (2004) Body-centered visuomotor adaptation. J Neurophysiol 92:416–423
van Pelt S, Medendorp WP (2007) Gaze-centered updating of remembered visual space during active whole-body translations. J Neurophysiol 97:1209–1220
Vetter P, Goodbody S, Wolpert D (1999) Evidence for an eye-centered spherical representation of the visuomotor map. J Neurophysiol 81:935–939
Volcic R, Kappers AML, Koenderink JJ (2007) Haptic parallelity perception on the frontoparallel plane: the involvement of reference frames. Percept Psychophys 69:276–286
Werkhoven P, Snippe HP, Toet A (1992) Visual processing of optic acceleration. Vision Res 32:2313–2329
Zago M, McIntyre J, Senot P, Lacquaniti F (2009) Visuo-motor coordination and internal models for object interception. Exp Brain Res 192:571–604
Acknowledgments
The authors wish to thank Betteco de Boer for her help in running the experiments. This research was supported by VENI grant 451-05-016 of the Netherlands Organization for Scientific Research (NWO), awarded to Joost C. Dessing.
Open Access
This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
Appendix
The dynamical neural network model for catching (Dessing et al. 2005) generates motor commands for catching on the basis of signals about ball position and velocity, hand position, and the time remaining before contact. Here, we describe a simplified two-dimensional version of this model used to generate the predictive simulations regarding the effect of gaze fixation (Fig. 6). The early representation of visual ball motion codes a visually delayed (i.e., by δ_vis) ball position (target position vector; TPV) and velocity (target velocity vector; TVV) in spherical coordinates (with θ = 0 representing straight ahead)
where R_ball represents the eye-ball distance. Similarly, the internal representation of hand position (present position vector; PPV) is
The gaze-centered movement vector (MV_GC) is calculated as:
Here ϖ = α·TC(t − δ_vis), where TC is a signal coding the time remaining until contact, and α a velocity scalar (Dessing et al. 2009; set to 1.1). The gaze-centered movement vector is transformed into a body-centered desired velocity vector, which involves multiplication with a gain (G/TC(t − δ_vis); Bullock and Grossberg 1988; Peper et al. 1994; G was set to 1.5) and projection onto the hand movement axis:
κ was set to 0, implying that only the component perpendicular to the gaze direction was taken into account. The simulations involved numerical integration of this system (using ode45.m in Matlab) to continuously update the coded body-centered lateral hand position, which in turn was used to update PPV(t). All simulated movements were initiated 0.7 s before the ball would arrive at the IP; a motor delay was implemented by delaying the model output by 75 ms. The ball trajectories used as input to the model were simulated ideal pendulums with the same configuration as used in the experiment (see Fig. 4). An air friction term was added that depended on the ball’s velocity vector raised to the power 1.5, the scaling of which (η = 0.11) was optimized using the measured flight times.
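The control law just described can be sketched in a few lines. The following is a simplified one-dimensional Python version (the original simulations used Matlab’s ode45 and pendulum ball trajectories): here the ball moves laterally at constant velocity, the visual and motor delays are omitted, and extrapolating the ball position over ϖ = α·TC is our reading of the movement-vector computation, so all names and simplifications are ours:

```python
def simulate_catch(x_ball0, vx_ball, x_hand0, T=0.7, dt=0.001,
                   G=1.5, alpha=1.1):
    """Euler integration of the lateral hand command: the desired hand
    velocity is the predicted-ball-minus-hand difference multiplied by
    the gain G / TC (cf. Bullock and Grossberg 1988; Peper et al. 1994),
    where TC is the remaining time to contact and the ball position is
    extrapolated over alpha * TC."""
    x_hand = x_hand0
    for i in range(int(T / dt)):
        t = i * dt
        tc = max(T - t, 1e-3)                    # remaining time to contact
        x_ball = x_ball0 + vx_ball * t           # current lateral ball position
        x_pred = x_ball + alpha * tc * vx_ball   # first-order extrapolation
        x_hand += (G / tc) * (x_pred - x_hand) * dt
    return x_hand
```

With these settings the hand converges on the ball’s lateral passing position as contact nears, because the gain G/TC grows without bound while the extrapolation term shrinks to zero.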
As announced in the “Introduction”, we also simulated this model with the inclusion of an exaggeration of retinal ball eccentricity and an underestimation of peripheral velocity. This involved modulations of the TPV and TVV, respectively. Following the results of Henriques et al. (1998), we set TPV_2 (in degrees) to
We did not include a similar exaggeration in the coded hand position. Irrespective of the fixation direction, the hand was located at a retinal eccentricity of more than 10°, at which the retinal exaggeration saturates (Eq. 3). This means that an exaggeration of the visual hand position will not influence the predicted effects of fixation direction. More importantly, because the hand was largely invisible, the coded hand position may be expected to depend predominantly on proprioceptive signals, which do not show such a bias (van Beers et al. 1999). These arguments hold both for the calculation of the movement vector and for the position signals needed to transform this vector into motor coordinates (Sober and Sabes 2003). Unreported simulations indeed confirmed that an exaggeration of the visual hand position did not substantially influence the hand movements predicted by our model. To simulate underestimation of peripheral ball velocity (using the ‘unaffected’ TPV_2 in degrees) we set TVV_2 to
This implements a linear decrease in the estimated target velocity with eccentricity, up to 30% at 40°, as roughly estimated from Tynan and Sekuler’s (1982) observations for the range of velocities we used (about 0–10° s−1 in the first 0.6 s). These authors also observed an effect of actual velocity on the scaling factor; we did not implement this effect for the sake of simplicity.
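Under the stated assumptions (a linear decrease up to 30% at 40°), this velocity scaling can be written as a one-line function; the function name and the clamping beyond 40° are our additions:

```python
def peripheral_speed_scale(ecc_deg, max_loss=0.3, ecc_max=40.0):
    """Multiplicative scaling of the coded target velocity as a linear
    function of retinal eccentricity: no loss at 0 deg, a 30%
    underestimation at 40 deg, held constant beyond that (assumed)."""
    ecc = min(abs(ecc_deg), ecc_max)
    return 1.0 - max_loss * ecc / ecc_max
```

Multiplying TVV_2 by this factor reproduces the manipulation described above: a ball seen 20° in the periphery would have its coded angular velocity reduced by 15%.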
The predicted movements for a model incorporating these effects are presented in Fig. 12. Both effects influence the details of the predicted kinematics. Exaggeration of retinal eccentricity adds to the effects of fixation (i.e., larger rightward and leftward biases when fixating forward and rightward, respectively, cf. Fig. 6), and peripheral velocity underestimation induces a general leftward bias, particularly for more eccentric passing positions. Importantly, however, the general predictions of our model with respect to the manipulation of fixation direction remain unaffected.
As mentioned in the “Discussion”, we simulated a version of the model that included an initial underestimation of azimuthal ball motion. We implemented this imbalance by replacing the coded ball motion with:
In these simulations κ = 1, α = 0.8, G = 3, and \( \text{TC} = -\left( Y_{\text{ball}} - Y_{\text{hand}} \right)/\dot{Y}_{\text{ball}} \). The kinematics predicted by this model are presented in Fig. 11 in the main text; evidently, this alternative model can account for the consistent leftward bias observed in our experiment(s). However, it should not be considered the new version of our model, which we feel would need to include a more accurate description of the representation of target motion (which is beyond the scope of the present study).
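The first-order time-to-contact used in these simulations follows directly from the equation above; a minimal sketch (the variable names are ours):

```python
def time_to_contact(y_ball, y_hand, y_ball_vel):
    """First-order time-to-contact along the Y axis:
    TC = -(Y_ball - Y_hand) / Ydot_ball.
    Positive when the ball is approaching the hand."""
    return -(y_ball - y_hand) / y_ball_vel
```

For example, a ball 2 m in front of the hand and approaching at 1 m/s yields a TC of 2 s; a negative TC would indicate that the ball is moving away from the hand.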
Rights and permissions
Open Access This is an open access article distributed under the terms of the Creative Commons Attribution Noncommercial License (https://creativecommons.org/licenses/by-nc/2.0), which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
Dessing, J.C., Oostwoud Wijdenes, L., Peper, C.E. et al. Visuomotor transformation for interception: catching while fixating. Exp Brain Res 196, 511–527 (2009). https://doi.org/10.1007/s00221-009-1882-6