
Predictive strategies in interception tasks: differences between eye and hand movements

  • Research Article
  • Experimental Brain Research

Abstract

To investigate how the sensorimotor systems of the eye and the hand use position, velocity, and timing information about moving targets, we conducted a series of three experiments. Subjects performed combined eye–hand catch-up movements toward visual targets that moved with step-ramp-like velocity profiles. Visual feedback of the hand was prevented by blanking the target at the onset of the hand movement. Multiple regression was used to determine how the position, velocity, and timing information available before each movement affected the movement amplitudes of the eye and the hand. The following results were obtained:

  1.

    The predictive strategy of eye movements could be modeled by a linear regression based on position error and target velocity. This was not the case for hand movements, for which there was a significant partial correlation between movement amplitude and the product of target velocity and movement duration. Because this correlation was absent for eye movements, we conclude that the predictive strategy of hand movements, unlike that of eye movements, takes movement duration into account.

  2.

    To determine whether the movement amplitudes of eye and hand depend on a categorical classification into a discrete number of movement types, we compared an experiment in which target position and velocity were distributed continuously with an experiment using only four different combinations of target position and velocity. No systematic differences between these experiments were observed. This shows that the system output is a function of continuous, interval-scaled variables rather than of discrete categorical variables.

  3.

    We also analyzed the component of the movement amplitudes not explained by the regression, i.e., the residual error. The residual errors of successive trials were correlated more strongly for eye than for hand movements, suggesting that short-term temporal fluctuations of the predictive strategy were stronger for the eye than for the hand.


References

  • Bahill AT, Clark MR, Stark L (1975) The main sequence, a tool for studying human eye movements. Math Biosci 24:191–204


  • Becker W (1989) The neurobiology of saccadic eye movements. Metrics. Rev Oculomot Res 3:13–67


  • Becker W, Jürgens R (1979) An analysis of the saccadic system by means of double step stimuli. Vision Res 19:967–983


  • Blohm G, Missal M, Lefevre P (2003) Interaction between smooth anticipation and saccades during ocular orientation in darkness. J Neurophysiol 89:1423–1433


  • Bortz J (1993) Partialkorrelation und Multiple Korrelation. In: Bortz J (ed) Statistik für Sozialwissenschaftler. Springer, Berlin Heidelberg New York, pp 411–446

  • Brenner E, Smeets JBJ (1996) Hitting moving targets: co-operative control of ‘when’ and ‘where’. Hum Mov Sci 15: 39–53


  • Brouwer AM, Brenner E, Smeets JBJ (2002) Hitting moving objects: is target speed used in guiding the hand? Exp Brain Res 143:198–211


  • De Brouwer S, Missal M, Barnes G, Lefevre P (2002) Quantitative analysis of catch-up saccades during sustained pursuit. J Neurophysiol 87:1772–1780


  • de Lussanet MH, Smeets JBJ, Brenner E (2001) The effect of expectations on hitting moving targets: influence of the preceding target’s speed. Exp Brain Res 137:246–248


  • Gellman RS, Carl JR (1991) Motion processing for saccadic eye movements in humans. Exp Brain Res 84:660–667


  • Hays AV, Richmond BJ, Optican LM (1982) A unix-based multiple process system for real-time data acquisition and control. Wescon Conf Proc 2:100–105


  • Heywood S, Churcher J (1981) Saccades to step-ramp stimuli. Vision Res 21:479–490


  • Messier J, Kalaska JF (1999) Comparison of variability of initial kinematics and endpoints of reaching movements. Exp Brain Res 125:139–152


  • Oldfield RC (1971) The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia 9:97–113


  • Ron S, Vieville T, Droulez J (1989) Target velocity based prediction in saccadic vector programming. Vision Res 29:1103–1114


  • Sailer U, Eggert T, Ditterich J, Straube A (2003) Predictive pointing movements and saccades toward a moving target. J Mot Behav 35:23–32


  • Smeets JBJ, Bekkering H (2000) Prediction of saccadic amplitude during smooth-pursuit eye movements. Hum Mov Sci 19:275–295


  • Smeets JBJ, Brenner E (1995) Perception and action are based on the same visual information: distinction between position and velocity. J Exp Psychol Hum Percept Perform 21:19–31


  • Smeets JBJ, Hooge IT (2003) Nature of variability in saccades. J Neurophysiol 90:12–20


  • Straube A, Deubel H (1995) Rapid gain adaptation affects the dynamics of saccadic eye movements in humans. Vision Res 35:3451–3458


  • van Donkelaar P, Lee RG, Gellman RS (1992) Control strategies in directing the hand to moving targets. Exp Brain Res 91:151–161



Acknowledgements

The authors are grateful to U. Sailer and S. Glasauer for constructive criticism of the manuscript and to J. Benson for copyediting the English text. This work was supported by the Deutsche Forschungsgemeinschaft (research unit 480). F.R. was supported by a grant from the EU (Marie-Curie training site).

Author information


Correspondence to Thomas Eggert.

Appendix

In this section we consider the problem that the independent variables (PE, PI, and ID) used to explain the movement amplitude depend on the assumed interval between the sampling time of the position error and movement onset. The question is how the result of the linear regression depends on this sampling time. We assume that sampling occurs a fixed time interval τ later than the default of 100 ms before movement onset. Thus, we consider a particular linear transformation of the vector \( \underline{x} \) of the independent variables to new independent variables \( \underline{z} \). The new position error PEτ exceeds the original position error by the product τ · V, the new invisible displacement IDτ is smaller than the original invisible displacement ID by the same amount, and the new prediction interval PIτ is smaller than the original prediction interval PI by τ:

$$ \underline{z} = \begin{bmatrix} PE_{\tau} \\ V \\ PI_{\tau} \\ ID_{\tau} \end{bmatrix} = \begin{bmatrix} PE + \tau \cdot V \\ V \\ PI - \tau \\ ID - \tau \cdot V \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ -\tau \\ 0 \end{bmatrix} + \begin{bmatrix} 1 & \tau & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & -\tau & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} PE \\ V \\ PI \\ ID \end{bmatrix} = \underline{k} + \mathbf{Q} \cdot \underline{x} $$
(4)
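
The following minimal sketch (in Python with NumPy, not part of the original study) builds the offset \( \underline{k} \) and matrix \( \mathbf{Q} \) of Eq. 4 for a hypothetical value of τ and verifies the transformation on one example vector; all numbers are illustrative assumptions.

```python
import numpy as np

# Transformation of Eq. 4; variable order is [PE, V, PI, ID].
tau = 0.05  # hypothetical sampling shift in seconds (illustrative only)

k = np.array([0.0, 0.0, -tau, 0.0])
Q = np.array([[1.0,  tau, 0.0, 0.0],
              [0.0,  1.0, 0.0, 0.0],
              [0.0,  0.0, 1.0, 0.0],
              [0.0, -tau, 0.0, 1.0]])

x = np.array([2.0, 10.0, 0.3, 1.5])  # example [PE, V, PI, ID]
z = k + Q @ x                        # [PE + tau*V, V, PI - tau, ID - tau*V]
assert np.allclose(z, [x[0] + tau*x[1], x[1], x[2] - tau, x[3] - tau*x[1]])
```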

Regression slopes

First, we show in general how the result of the regression of a dependent variable on \( \underline{z} \) can be expressed in terms of the result of the regression on \( \underline{x} \). We then focus on the special case given in Eq. 4.

The multiple linear regression of the dependent variable y on the independent variables \( \underline{x} \) minimizes the mean square error ε2, defined by

$$ \varepsilon ^{2} : = {\sum\limits_{i = 1}^N {{\left( {y_{i} - e_{0} - \underline{e} ^{T} \cdot \underline{x} _{i} } \right)}^{2} } } $$
(5)

in e0 and in the components of the vector \( {\underline{e} } \), which holds the regression slopes. N denotes the number of observations. The solution of this problem (see, for example, Bortz 1993) is given by

$$ \underline e = {\bf C}_{xx}^{ - 1} \cdot \underline c _{xy} ,$$
(6)

where the auto-covariance matrix \( \mathbf{C}_{xx} \) is defined by

$$ {\mathbf{C}}_{{xx}} : = \frac{1} {{N - 1}} \cdot {\sum\limits_{i = 1}^N {{\left( {\underline{x} _{i} - \overline{{\underline{x} }} } \right)} \cdot {\left( {\underline{x} _{i} - \overline{{\underline{x} }} } \right)}^{T} } }, $$
(7)

and the covariance vector \( \underline{c} _{{xy}} \) is defined by

$$ \underline{c}_{xy} := \frac{1}{N - 1} \cdot \sum_{i = 1}^{N} \left( \underline{x}_{i} - \overline{\underline{x}} \right) \cdot y_{i}. $$
(8)

Using the solution of \( \underline{e} \) from Eq. 6, e0 can be computed as follows:

$$ e_{0} = \overline{y} - \underline{e} ^{T} \cdot \underline{{\overline{x} }} $$
(9)
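
As a concrete illustration of Eqs. 6–9, the slopes and intercept can be computed directly from the sample covariances. This is a sketch on synthetic data, not the study's actual analysis code; the column labels and coefficients are illustrative assumptions.

```python
import numpy as np

# Multiple regression via covariances (Eqs. 6-9) on synthetic data.
rng = np.random.default_rng(0)
N = 200
X = rng.normal(size=(N, 4))                 # rows: observations of [PE, V, PI, ID]
y = X @ np.array([0.9, 0.2, -0.5, 0.1]) + 1.0 + 0.1 * rng.normal(size=N)

C_xx = np.cov(X, rowvar=False)              # auto-covariance matrix, Eq. 7
c_xy = np.cov(X, y, rowvar=False)[:-1, -1]  # covariance vector, Eq. 8
e = np.linalg.solve(C_xx, c_xy)             # regression slopes, Eq. 6
e0 = y.mean() - e @ X.mean(axis=0)          # intercept, Eq. 9
```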

Accordingly, the regression of y on \( \underline{z} \) is given by

$$ \underline{\tilde{e}} = \mathbf{C}_{zz}^{-1} \cdot \underline{c}_{zy}, $$
(10)

and

$$ \tilde{e}_{0} = \overline{y} - \underline{{\tilde{e}}} ^{T} \cdot \overline{{\underline{z} }} . $$
(11)

Since the auto-covariance matrix \( \mathbf{C}_{zz} \) and the covariance vector \( \underline{c}_{zy} \) can be expressed in terms of \( \mathbf{C}_{xx} \) and \( \underline{c}_{xy} \),

$$ \mathbf{C}_{zz} = \mathbf{Q} \cdot \mathbf{C}_{xx} \cdot \mathbf{Q}^{T}, \qquad \underline{c}_{zy} = \mathbf{Q} \cdot \underline{c}_{xy}, $$
(12)

we can use Eqs. 6 and 12 to rewrite Eq. 10 as

$$ \underline{\tilde{e}} = \left( \mathbf{Q}^{T} \right)^{-1} \cdot \mathbf{C}_{xx}^{-1} \cdot \mathbf{Q}^{-1} \cdot \mathbf{Q} \cdot \underline{c}_{xy} = \left( \mathbf{Q}^{T} \right)^{-1} \cdot \mathbf{C}_{xx}^{-1} \cdot \underline{c}_{xy} = \left( \mathbf{Q}^{T} \right)^{-1} \cdot \underline{e}. $$
(13)

Using Eqs. 13, 9, and 4, we can rewrite Eq. 11 as

$$ \tilde{e}_{0} = \overline{y} - \left[ \left( \mathbf{Q}^{T} \right)^{-1} \cdot \underline{e} \right]^{T} \cdot \left( \underline{k} + \mathbf{Q} \cdot \overline{\underline{x}} \right) = \overline{y} - \underline{e}^{T} \cdot \overline{\underline{x}} - \underline{e}^{T} \cdot \mathbf{Q}^{-1} \cdot \underline{k} = e_{0} - \underline{e}^{T} \cdot \mathbf{Q}^{-1} \cdot \underline{k}. $$
(14)

Equations 13 and 14 show, in general, how the regression coefficients transform under linear transformations of the independent variables.
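
These identities are easy to verify numerically. The following sketch (synthetic data, an arbitrary invertible Q and offset k; purely illustrative, not from the paper) checks Eqs. 13 and 14:

```python
import numpy as np

def regress(M, y):
    """Slopes and intercept per Eqs. 6 and 9."""
    C = np.cov(M, rowvar=False)
    c = np.cov(M, y, rowvar=False)[:-1, -1]
    e = np.linalg.solve(C, c)
    return e, y.mean() - e @ M.mean(axis=0)

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
y = X @ rng.normal(size=4) + 0.1 * rng.normal(size=300)
Q = rng.normal(size=(4, 4)) + 4.0 * np.eye(4)  # random, well-conditioned
k = rng.normal(size=4)

e, e0 = regress(X, y)
et, et0 = regress(X @ Q.T + k, y)              # regression on z = k + Q x
assert np.allclose(et, np.linalg.solve(Q.T, e))         # Eq. 13
assert np.isclose(et0, e0 - e @ np.linalg.solve(Q, k))  # Eq. 14
```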

Now we focus on the special case of the transformation given in Eq. 4. This yields

$$ \mathbf{Q}^{-1} = \begin{bmatrix} 1 & -\tau & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & \tau & 0 & 1 \end{bmatrix} \quad \text{and} \quad \mathbf{Q}^{-1} \cdot \underline{k} = \begin{bmatrix} 0 \\ 0 \\ -\tau \\ 0 \end{bmatrix} = \underline{k}. $$
(15)

Thus, for this special case, the regression slopes for the new independent variables \( \underline{z} \) can be computed using Eqs. 13 and 15:

$$ \underline{\tilde{e}} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ -\tau & 1 & 0 & \tau \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \cdot \underline{e} = \begin{bmatrix} e_{1} \\ e_{2} - \tau \cdot \left( e_{1} - e_{4} \right) \\ e_{3} \\ e_{4} \end{bmatrix}. $$
(16)

The intercept \( \tilde{e}_{0} \) for the new variables \( \underline{z} \) is computed using Eqs. 14 and 15:

$$ \tilde{e}_{0} = e_{0} - \underline{e}^{T} \cdot \underline{k} = e_{0} + \tau \cdot e_{3}. $$
(17)

Equation 16 shows that the regression slopes of PEτ, PIτ, and IDτ are identical to the regression slopes of PE, PI, and ID, respectively. However, as one can also see from Eqs. 16 and 17, the intercept and the regression slope of V depend critically on τ.
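
A numerical check of this special case (again a sketch on synthetic data, with a hypothetical τ; not the authors' code) confirms that only the V slope and the intercept change:

```python
import numpy as np

def regress(M, y):
    """Slopes and intercept per Eqs. 6 and 9."""
    C = np.cov(M, rowvar=False)
    c = np.cov(M, y, rowvar=False)[:-1, -1]
    e = np.linalg.solve(C, c)
    return e, y.mean() - e @ M.mean(axis=0)

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 4))                  # columns: [PE, V, PI, ID]
y = X @ np.array([0.8, 0.3, -0.4, 0.2]) + 0.5 + 0.05 * rng.normal(size=300)

tau = 0.05
k = np.array([0.0, 0.0, -tau, 0.0])
Q = np.array([[1, tau, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, -tau, 0, 1.0]])

e, e0 = regress(X, y)
et, et0 = regress(X @ Q.T + k, y)              # regression on z = k + Q x
assert np.allclose(et[[0, 2, 3]], e[[0, 2, 3]])        # PE, PI, ID slopes unchanged
assert np.allclose(et[1], e[1] - tau * (e[0] - e[3]))  # V slope, Eq. 16
assert np.allclose(et0, e0 + tau * e[2])               # intercept, Eq. 17
```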

Partial correlations

The squared partial correlation between an independent variable \( x_1 \) and a dependent variable y is the proportion of (unique) variance accounted for by \( x_1 \), in the presence of the other independent variables \( x_2, \ldots, x_m \), relative to the residual or unexplained variance that cannot be accounted for by \( x_2, \ldots, x_m \). The partial correlation between y and \( x_1 \) can therefore be interpreted as an indicator of the relevance of \( x_1 \) for explaining y. In this section we show that the linear transformation of the independent variables \( \underline{x} \) to the independent variables \( \underline{z} \), as defined in Eq. 4, does not affect the partial correlations of PE, PI, and ID.

To compute the partial correlation we must first consider the variance \( c_{{\hat{y}\hat{y}}} \) explained by a multiple regression

$$ c_{\hat{y}\hat{y}} = \frac{1}{N - 1} \cdot \sum_{i = 1}^{N} \left( \hat{y}_{i} - \overline{\hat{y}} \right)^{2}. $$

Here \( \hat{y}_{i} \) denotes the prediction of the dependent variable y on the basis of the vector \( \underline{x} _{i} \) of independent variables

$$ \hat{y}_{i} = e_{0} + \underline{e} ^{T} \cdot \underline{x} _{i} $$

The explained variance \( c_{\hat{y}\hat{y}} \) can be computed using the covariance matrix \( \mathbf{C}_{xx} \) and the covariance vector \( \underline{c}_{xy} \) as defined in Eqs. 7 and 8:

$$c_{{\hat{y}\hat{y}}} = \underline{c} ^{T}_{{xy}} \cdot {\mathbf{C}}^{{ - 1}}_{{xx}} \cdot \underline{c} _{{xy}} .$$

Accordingly, the variance \( \tilde{c}_{\hat{y}\hat{y}} \) explained by a linear transformation \( \underline{z} \) of \( \underline{x} \) is

$$ \tilde{c}_{{\hat{y}\hat{y}}} = \underline{c} ^{T}_{{zy}} \cdot {\mathbf{C}}^{{ - 1}}_{{zz}} \cdot \underline{c} _{{zy}} . $$

By using Eq. 12, it becomes clear that the explained variance is not affected by any linear (and invertible) transformation of the independent variables:

$$ \tilde{c}_{\hat{y}\hat{y}} = \underline{c}_{xy}^{T} \cdot \mathbf{Q}^{T} \cdot \left( \mathbf{Q}^{T} \right)^{-1} \cdot \mathbf{C}_{xx}^{-1} \cdot \mathbf{Q}^{-1} \cdot \mathbf{Q} \cdot \underline{c}_{xy} = c_{\hat{y}\hat{y}}. $$
(18)
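
The invariance of the explained variance is likewise easy to check numerically; the sketch below makes the same synthetic-data assumptions as the earlier examples:

```python
import numpy as np

def explained_variance(M, y):
    """Explained variance c_My^T * C_MM^{-1} * c_My, per the text above."""
    C = np.cov(M, rowvar=False)
    c = np.cov(M, y, rowvar=False)[:-1, -1]
    return c @ np.linalg.solve(C, c)

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 4))
y = X @ np.array([0.8, 0.3, -0.4, 0.2]) + 0.05 * rng.normal(size=300)

tau = 0.05
k = np.array([0.0, 0.0, -tau, 0.0])
Q = np.array([[1, tau, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, -tau, 0, 1.0]])
assert np.isclose(explained_variance(X, y),
                  explained_variance(X @ Q.T + k, y))  # Eq. 18
```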

The second quantity necessary to compute the partial correlation is the variance \( d_{\hat{y}\hat{y}} \) explained by the independent variables \( x_2, \ldots, x_m \) only. If the vectors \( \underline{w} \) and \( \underline{v} \) are defined as

$$ \underline{w} = \begin{bmatrix} x_{2} \\ x_{3} \\ \vdots \\ x_{m} \end{bmatrix}, \quad \text{and} \quad \underline{v} = \begin{bmatrix} z_{2} \\ z_{3} \\ \vdots \\ z_{m} \end{bmatrix}, $$

then the variance explained by \( \underline{w} \) or \( \underline{v} \) can be expressed as \( d_{\hat{y}\hat{y}} = \underline{c}_{wy}^{T} \cdot \mathbf{C}_{ww}^{-1} \cdot \underline{c}_{wy} \) or \( \tilde{d}_{\hat{y}\hat{y}} = \underline{c}_{vy}^{T} \cdot \mathbf{C}_{vv}^{-1} \cdot \underline{c}_{vy} \), respectively, where \( \mathbf{C}_{ww} \), \( \underline{c}_{wy} \), \( \mathbf{C}_{vv} \), and \( \underline{c}_{vy} \) are defined in analogy to Eqs. 7 and 8. Thus, if \( \underline{v} \) is a linear transformation of \( \underline{w} \), the same computation as shown in Eq. 18 leads to the finding that the variance explained by \( \underline{v} \) is identical to the variance explained by \( \underline{w} \):

$$ d_{{\hat{y}\hat{y}}} = \tilde{d}_{{\hat{y}\hat{y}}} $$
(19)

The condition that \( \underline{v} \) is a linear transformation of \( \underline{w} \) is fulfilled if the first component of the first column of \( \mathbf{Q} \) is its only non-zero component, i.e.:

$$ \mathbf{Q} = \begin{bmatrix} q_{11} & q_{12} & \cdots & q_{1m} \\ 0 & q_{22} & \cdots & q_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & q_{m2} & \cdots & q_{mm} \end{bmatrix}, $$
(20)

since in this case, it follows from \( \underline{z} = \underline{k} + {\mathbf{Q}} \cdot \underline{x} \) that

$$ \underline{v} = \begin{bmatrix} k_{2} \\ k_{3} \\ \vdots \\ k_{m} \end{bmatrix} + \begin{bmatrix} q_{22} & q_{23} & \cdots & q_{2m} \\ q_{32} & q_{33} & \cdots & q_{3m} \\ \vdots & \vdots & \ddots & \vdots \\ q_{m2} & q_{m3} & \cdots & q_{mm} \end{bmatrix} \cdot \underline{w}. $$

For the special case we are interested in, Eq. 4 shows that this condition is fulfilled.

With the values of the variance \( c_{\hat{y}\hat{y}} \) explained by all independent variables and of the variance \( d_{\hat{y}\hat{y}} \) explained by \( x_2, \ldots, x_m \) only, we can compute the squared partial correlation \( PC_{1}^{2} \) between y and \( x_1 \) by

$$ PC_{1}^{2} = \frac{c_{\hat{y}\hat{y}} - d_{\hat{y}\hat{y}}}{c_{yy} - d_{\hat{y}\hat{y}}}, $$
(21)

where the total variance \( c_{yy} \) of y is defined as \( c_{yy} = \frac{1}{N - 1} \cdot \sum_{i = 1}^{N} (y_{i} - \overline{y})^{2} \). In the literature (see, for example, Bortz 1993), the squared partial correlation is often computed as

$$ PC_{1}^{2} = \frac{R_{y\underline{x}}^{2} - R_{y\underline{w}}^{2}}{1 - R_{y\underline{w}}^{2}}, $$
(22)

where \( R_{y\underline{x}}^{2} \) \( \left( R_{y\underline{w}}^{2} \right) \) is the squared multiple correlation between y and \( \underline{x} \) (between y and \( \underline{w} \)):

$$ R_{y\underline{x}}^{2} = \frac{c_{\hat{y}\hat{y}}}{c_{yy}}, \qquad R_{y\underline{w}}^{2} = \frac{d_{\hat{y}\hat{y}}}{c_{yy}}. $$

The numerator of Eq. 21 is the variance accounted for by \( x_1 \) in the presence of the other independent variables \( x_2, \ldots, x_m \), and the denominator is the residual or unexplained variance that cannot be accounted for by \( x_2, \ldots, x_m \). With Eqs. 18 and 19 it becomes clear from Eq. 21 that \( PC_1 \) is not affected by the linear transformation \( \underline{z} = \underline{k} + {\mathbf{Q}} \cdot \underline{x} \) of the form given in Eq. 20.
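
The two forms can be checked against each other numerically. The sketch below (synthetic data, illustrative only) computes \( PC_1^2 \) once via the explained variances of Eq. 21 and once via the squared multiple correlations of Eq. 22:

```python
import numpy as np

def explained_variance(M, y):
    C = np.cov(M, rowvar=False)
    c = np.cov(M, y, rowvar=False)[:-1, -1]
    return c @ np.linalg.solve(C, c)

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 4))
y = X @ np.array([0.8, 0.3, -0.4, 0.2]) + 0.2 * rng.normal(size=300)

c_yy = y.var(ddof=1)                     # total variance of y
c_hat = explained_variance(X, y)         # all independent variables
d_hat = explained_variance(X[:, 1:], y)  # x_2, ..., x_m only (the vector w)

pc2_eq21 = (c_hat - d_hat) / (c_yy - d_hat)  # Eq. 21
R2_x, R2_w = c_hat / c_yy, d_hat / c_yy
pc2_eq22 = (R2_x - R2_w) / (1.0 - R2_w)      # Eq. 22
assert np.isclose(pc2_eq21, pc2_eq22)
```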

In general, it follows that any partial correlation \( PC_j \) between \( x_j \) and y is not affected by the linear transformation \( \underline{z} = \underline{k} + {\mathbf{Q}} \cdot \underline{x} \) if \( q_{ij} = 0 \) for all i with \( 1 \le i \le m \) and \( i \ne j \). This condition is equivalent to the condition that the regression slope \( e_j \) of the independent variable \( x_j \) is not affected by the linear transformation (see Eq. 13). As shown in the previous section for the special transformation in Eq. 4, this is the case for the independent variables PE, PI, and ID.



Cite this article

Eggert, T., Rivas, F. & Straube, A. Predictive strategies in interception tasks: differences between eye and hand movements. Exp Brain Res 160, 433–449 (2005). https://doi.org/10.1007/s00221-004-2028-5
