Experimental Brain Research, Volume 151, Issue 2, pp 158–166

Effects of object shape and visual feedback on hand configuration during grasping


  • Luis F. Schettino
    • Center for Molecular and Behavioral Neuroscience, Rutgers University
    • Psychology Department, Williams College
  • Sergei V. Adamovich
    • Center for Molecular and Behavioral Neuroscience, Rutgers University
    • Institute for Information Transmission Problems, Russian Academy of Science
  • Howard Poizner
    • Center for Molecular and Behavioral Neuroscience, Rutgers University, 197 University Avenue, University Heights
Research Article

DOI: 10.1007/s00221-003-1435-3

Cite this article as:
Schettino, L.F., Adamovich, S.V. & Poizner, H. Exp Brain Res (2003) 151: 158. doi:10.1007/s00221-003-1435-3


Normal subjects gradually preshape their hands during a grasping movement in order to conform the hand to the shape of a target object. The evolution of hand preshaping may depend on visual feedback about arm and hand position as well as on target shape and location at specific times during the movement. The present study manipulated object shape in order to produce differentiable patterns of finger placement along two orthogonal "dimensions" (flexion/extension and abduction/adduction), and manipulated the amount of available visual information during a grasp. Normal subjects were asked to reach to and grasp a set of objects presented in a randomized fashion at a fixed spatial location in three visual feedback conditions: Full Vision (both hand and target visible), Object Vision (the object visible but not the hand) and No Vision (vision of neither the hand nor the object during the movement). Flexion/extension angles of the metacarpophalangeal and proximal interphalangeal joints of the index, middle, ring and little fingers, as well as the abduction/adduction angles between the index-middle and middle-ring fingers, were recorded. Kinematic analysis revealed that as visual feedback was reduced, movement duration increased and time to peak aperture of the hand decreased, in accord with previously reported studies. Analysis of the patterns of joint flexion/extension and abduction/adduction per object shape revealed that preshaping based on the abduction/adduction dimension occurred early during the reach for all visual feedback conditions (~45% of normalized movement time). This early preshaping across visual feedback conditions suggests the existence of mechanisms involved in the selection of basic hand configurations.
Furthermore, while configuration changes in the flexion/extension dimension resulting in well-defined hand configurations occurred earlier during the movement in the Object Vision and No Vision conditions (45%), those in the Full Vision condition were observed only after 75% of the movement, as the moving hand entered the central region of the visual field. The data indicate that there are at least two control mechanisms at work during hand preshaping, an early predictive phase during which grip selection is attained regardless of availability of visual feedback and a late responsive phase during which subjects may use visual feedback to optimize their grasp.


Keywords: Prehension · Hand preshaping · Visual feedback · Visuomotor control


The present study focuses on the control processes underlying handshape specification and their modulation through visual feedback. To do so, we took advantage of recent technological innovations that allow the quantification of several parameters involved in the specification of hand configuration during a reach-to-grasp movement. During such a movement, hand configuration evolves gradually in a target-shape-dependent fashion as the hand approaches objects of different shapes (Santello and Soechting 1998).

While the differences between hand configurations are already detectable at the time of peak aperture (between 50% and 75% of movement time) (Santello and Soechting 1998), the moment during the movement at which the influence of object shape begins to be distinguishable has not yet been determined. In a pioneering study designed to characterize the development of hand preshaping, Santello and Soechting (1998) reported that the effect of object shape could be observed after 50% of movement time.

Previous studies of prehension have generally focused on the relationship between the thumb and the index finger (Jeannerod 1997). Furthermore, the effects of different amounts of visual feedback for the determination of hand aperture have been extensively studied (Wing et al. 1986; Sivak and MacKenzie 1992; Berthier et al. 1996; Connolly and Goodale 1999; Churchill et al. 2000). In some of these studies maximum aperture increased as visual feedback was reduced, possibly as a compensatory measure that allows for the online correction of spatial errors (Wing et al. 1986; Sivak and MacKenzie 1992; Berthier et al. 1996). Moreover, most studies have reported that movement duration also tends to increase as visual feedback degrades, especially due to longer deceleration phases (Wing et al. 1986; Berthier et al. 1996; Churchill et al. 2000).

Given that the reach-to-grasp movement is thought to consist of two semi-independent but coordinated elements (a reaching or transport phase and a grasping or manipulation phase) (Jeannerod 1981), grasping an object without vision may result in differences in when handshapes are specified in time compared to grasping in full vision conditions. The specific timing of these differences could offer some information regarding the temporal pattern of the use of visual feedback during a reach-to-grasp movement, a point still open to debate (Desmurget and Grafton 2000).

A further point of interest is the use of information about object shape in the evolution of hand configuration regardless of visual feedback condition. It is not known whether, and when, object shape, an intrinsic characteristic (Jeannerod 1981), is included in the preplanning of the reach-to-grasp movement. If it is the case that visual feedback is not critical during the early stages of sensorimotor behavior, then evidence of early effects of object shape on hand configuration would suggest the inclusion of shape information in the putatively pre-programmed motor commands (Carlton 1981).

In the present experiment, we contrasted the temporal evolution of hand preshaping of normal subjects during a grasping task directed towards objects of different shapes in three experimental conditions: a Full Vision condition, an Object Vision condition (only the target object was visible but not the moving limb) and a No Vision condition (neither the moving limb nor the object was visible). By noting the time during the reach-to-grasp movement at which distinct hand configurations can be differentiated, we can determine when and if visual feedback manipulations affect hand preshaping. A finding of well-differentiated hand configurations regardless of visual condition would indicate the existence of control processes which may be affected by shape during the grasping movement. Furthermore, the presence of differential effects at specific time points between conditions would suggest different rates of handshape specification.

Our results indicate that robust shape-dependent effects in hand preshaping occurred well before the 50% time point regardless of visual feedback condition, suggesting that object shape information is included in the preplanning stage of the reach-to-grasp movement. Also, the preshaping patterns in the Full Vision condition exhibited a significant difference between object-specific comparisons well into the second half of the movement, a difference absent in the Object Vision and No Vision conditions. This finding supports the idea that closed-loop strategies are used to control hand configuration at this stage of the movement. Nevertheless, our results on handshape determination in the reduced visual feedback conditions are also consistent with the notion that to a large extent hand configuration may be accomplished in the absence of visual feedback. A preliminary report of these data has been presented in abstract form (Schettino et al. 2000).

Materials and methods


Nine young-adult subjects (ages 19–35 years, four women, five men) participated in the study. All subjects were informed about the nature of the study and signed consent forms approved by the IRB of Rutgers University. All subjects were right handed and without history of any neurological disorder or impairment.


Subjects sat comfortably in front of a flat table and were presented with objects of three different shapes. The objects were positioned on a presentation platform consisting of a 3.8-cm-square sectioned board protruding 14 cm from a larger base section. The table and presentation platform were painted flat black, visually isolating the objects at a height of 15 cm from the surface of the table, which allowed a comfortable grasping position. All objects were approximately the same size, constructed of solid wood, and measured 9.5 cm in height and 2.8 cm in thickness (average weight 70 g). The width and shapes of the objects varied as follows: a "Square" object (SQ) of regular shape, 5 cm wide; a "Concave" object (CC), 7 cm in total width, with a triangular indentation extending from each of two of its corners to its center; and a "Convex" object (CX) exhibiting a triangular protrusion extending from the half point of its width to a central point 3.5 cm from it (see Fig. 1). During the experiment, both the Concave and Convex objects were presented with the indentation or protrusion on the right side. These shapes were selected from those used by Santello and Soechting (1998), who devised a set of 15 similarly sized objects varying along a dimension of convexity/concavity. The objects were painted with a light-green fluorescent paint (Seal-Krete) so as to be clearly visible in complete darkness, and the background of the apparatus was uniform and dark colored to reduce visual noise.
Fig. 1.

Object shapes used in the experiment. The abscissa represents the angle of flexion/extension of the middle and ring fingers for each object grasp for one subject. The ordinate represents the angle of abduction between the index and middle fingers. Note that objects SQ and CC elicit only minimal abduction during a grasp


The CyberGlove MCP and PIP sensors were individually calibrated for each subject prior to collecting data. The output of the abduction/adduction sensors was transformed to degrees using the factory calibration parameters.

Each trial started with the subject's hand in a preset initial position (fingertips and thumb touching one another and held against the sternum). In order to normalize for differences in subjects' arm lengths, objects were placed at a distance from the shoulder equal to the length of the subject's extended arm from shoulder to wrist. All experimental sessions were videotaped for offline analysis of error patterns. An infra-red sensitive camera (Sony CCD-TR416) was used when the experimental conditions required a darkened room. During the experiment, objects were presented in a pseudo-randomized fashion.

In all conditions, subjects were given sufficient time (>2 s) to view the objects adequately before trial initiation. The subjects were instructed to maintain the initial position until they heard a tone signaling the start of the trial. Once the tone sounded, the subjects were to reach to and grasp the object at a comfortable speed. A successful grasp was defined as the thumb being positioned on the left vertical surface opposing the other four fingers. This type of grasp, referred to as finger prehension (Rizzolatti et al. 1988), was chosen because it produces a well-defined hand configuration. The subjects were further instructed to lift the object vertically and to release it on the table surface next to the presentation platform. Each subject performed a total of 90 reaches (3 shapes × 3 conditions × 10 trials). If the subject did not grasp an object successfully, another trial was run in its place. Subjects practiced grasping all objects before the experiment for five to ten trials.
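The pseudo-randomized presentation described above can be sketched as follows. This is an illustrative helper of our own (`make_trial_order` is not code from the study), assuming each condition's block simply shuffles ten repetitions of each shape:

```python
import random

def make_trial_order(shapes=("SQ", "CC", "CX"), n_trials=10, seed=None):
    """Return a shuffled trial list in which each object shape
    appears n_trials times (one visual condition's worth of trials)."""
    rng = random.Random(seed)
    trials = [shape for shape in shapes for _ in range(n_trials)]
    rng.shuffle(trials)
    return trials

# One 90-trial session: three visual conditions, each with its own shuffled block
session = [(condition, shape)
           for condition in ("Full Vision", "Object Vision", "No Vision")
           for shape in make_trial_order()]
```

Whether shapes were shuffled within condition blocks or fully interleaved is not specified in the text; the sketch assumes the former.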

Reduced visual feedback conditions

The grasping procedure was essentially the same in the reduced visual feedback conditions as in the Full Vision condition. The main differences were as follows: in the Object Vision condition, the overhead lights were turned off and the experimental room fully darkened so that only the fluorescent objects were visible. This manipulation eliminated visual feedback of the subject's moving arm during the experiment. After each trial, the lights were switched on for approximately 1–2 s in order to prevent dark adaptation.

Finally, in the No Vision condition subjects were asked to close their eyes upon hearing the tone indicating the start of the trial and to immediately initiate the reach-to-grasp movement. The interval between eye closure and movement initiation was less than 1 s. An experimenter sitting in front of the subject ensured that the subjects complied as required.

These manipulations eliminated visual feedback of environmental cues as well as that of the hand, or the hand and object, depending on condition. Earlier work has indicated that the removal of such information may have an effect on kinematic parameters (Connolly and Goodale 1999; Churchill et al. 2000). However, the results of Churchill et al. (2000), together with those of the majority of previous studies (Wing et al. 1986; Jakobson and Goodale 1991; Chieffi and Gentilucci 1993; Berthier et al. 1996), indicate that the removal of environmental cues does not affect kinematic parameters in an unpredictable fashion. Rather, the elimination of environmental cues appears to lie on a continuum of the amount of visual feedback available, resulting in longer movement times (MT), earlier times to peak aperture (TPA) and larger peak apertures (PA). Therefore, we expected our results on kinematic parameters to reflect a compounded effect of the absence of visual cues plus that of the manipulated variable.

Data capture

The positions of the major joints of the arm were recorded in 3D space using four electromagnetic sensors (MiniBird system, Immersion, Inc.) attached with adhesive tape to the subject's arm segments at positions that were referenced to the following body landmarks: shoulder, elbow, wrist and midpoint of the dorsum of the hand. In this experiment, only the 3D displacement of the wrist was analyzed. Finger joint flexion and extension were obtained via resistive bend sensors embedded in a glove (CyberGlove, Virtual Technologies, Inc.) worn on the right hand. The flexion/extension of the metacarpophalangeal (MCP) and proximal interphalangeal (PIP) joints of the index, middle, ring and little fingers were included in the analysis as well as the abduction/adduction joints of the index-middle and middle-ring fingers (ABD). All the devices were individually connected to RS-232 serial ports of a SGI Octane/SSE workstation (SGI, Inc.). Multithreaded, high-performance drivers for both the MiniBird and CyberGlove were developed that allowed all the devices to stream data to the disk at the highest possible rate (100 Hz).


All glove sensor data were transformed into degrees for analysis. Wrist trajectories were inspected visually using a computerized system for 3D graphic analysis of human multiple-joint movements (Poizner et al. 1986; 1990; Kothari et al. 1992). Movement onset was defined as the point at which the wrist sensor reached 5% of the peak velocity during the initial acceleration phase. We did not use the 5% criterion for defining the movement offset, since a trial-by-trial analysis of wrist motion indicated that only 60% of trials dropped below 5% of peak velocity. Moreover, in only a small percentage (3%) of trials did the wrist come to a full stop during the grasp (wrist velocity less than 5% of peak velocity for at least 50 ms). In a few cases when the wrist came to a full stop, finger motion continued up to the point when wrist motion changed its direction. Hence, movement offset was defined as the point at which wrist motion shifted direction during the grasping movement, corresponding to the point in time at which the object was lifted.
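Under these definitions, onset and offset can be recovered from sampled wrist data roughly as follows. This is a sketch under stated assumptions: `movement_onset` takes a wrist speed profile, `movement_offset` takes the velocity component along the reach axis (so a sign change marks the direction reversal at object lift); both function names are ours:

```python
import numpy as np

def movement_onset(speed, dt=0.01):
    """Onset: first sample of the initial acceleration phase at which wrist
    speed exceeds 5% of peak speed (the criterion given in the text)."""
    v = np.asarray(speed, dtype=float)
    peak = int(v.argmax())
    threshold = 0.05 * v[peak]
    # search before the peak for the last sample still below threshold
    below = np.nonzero(v[:peak] < threshold)[0]
    onset_idx = below[-1] + 1 if below.size else 0
    return onset_idx * dt

def movement_offset(v_along_reach, dt=0.01):
    """Offset: first sample after peak speed at which motion along the
    reach axis reverses direction (i.e., the object is lifted)."""
    v = np.asarray(v_along_reach, dtype=float)
    peak = int(np.abs(v).argmax())
    reversed_idx = np.nonzero(np.sign(v[peak:]) != np.sign(v[peak]))[0]
    if reversed_idx.size:
        return (peak + reversed_idx[0]) * dt
    return (len(v) - 1) * dt  # no reversal found
```

The 100-Hz sampling rate reported for the recording system motivates the default `dt=0.01`.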

Wrist kinematics

We analyzed the following kinematic parameters: movement time (MT), defined as the time between the onset and the offset as described above. Subsequently we divided MT into four sections: (1) onset to peak acceleration, (2) peak acceleration to peak velocity, (3) peak velocity to peak deceleration and (4) peak deceleration to offset (Churchill et al. 2000).
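The four-way segmentation above can be computed from a wrist speed profile as in the following sketch (a helper of our own, assuming the trace already spans onset to offset and acceleration is obtained by numerical differentiation):

```python
import numpy as np

def movement_sections(speed, dt=0.01):
    """Split a wrist speed profile into the four sections used in the text:
    (1) onset to peak acceleration, (2) to peak velocity,
    (3) to peak deceleration, (4) to offset. Returns durations in seconds."""
    v = np.asarray(speed, dtype=float)
    a = np.gradient(v, dt)                  # acceleration estimate
    i_pv = int(v.argmax())                  # peak velocity
    i_pa = int(a[:max(i_pv, 1)].argmax())   # peak acceleration, before peak velocity
    i_pd = i_pv + int(a[i_pv:].argmin())    # peak deceleration, after peak velocity
    bounds = [0, i_pa, i_pv, i_pd, len(v) - 1]
    return [(bounds[k + 1] - bounds[k]) * dt for k in range(4)]
```

The four durations sum to the movement time, so Section 4 directly measures the low-velocity approach phase analyzed in the Results.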

Determination of time to maximum aperture

In the absence of reliable data regarding thumb position (due to technical limitations of the CyberGlove), maximum aperture must be calculated differently (but see Wing and Fraser 1983; Galea et al. 2001). The distance between the tip of the index finger and the line perpendicular to the fully extended finger, passing through its MCP joint (Index Finger Displacement, or IFD), is described by the equation:
$$ \mathrm{IFD} = l_{1}\cos\alpha + l_{2}\cos\left(\alpha + \beta\right) $$
where l1 and l2 represent the length of the proximal phalanx and the compounded length of the middle and distal phalanges, respectively, and α and β represent the flexion angles relative to the unflexed (0 degrees) position of the MCP and PIP joints, respectively. We define maximum grip aperture as the largest IFD reached by the tip of the index finger from the line perpendicular to the line described by the same finger at zero degrees of bend passing through its MCP (see Fig. 2). In order to assess the reliability of this measure, we conducted a correlation analysis between IFD and grip aperture as measured by electromagnetic sensors placed on the tips of the thumb and index finger of a normal control subject during grasping. This analysis yielded a correlation of r=0.99 (P<0.0001) for ten grasping trials. In order to produce the relevant comparisons, the IFD of our subjects was calculated by taking the values of l1 and l2 from the actual lengths of the subjects' index finger phalanges.
Fig. 2.

Determination of IFD (index finger displacement). M and P represent the MCP and PIP joints of the index finger, respectively. l1 and l2 represent the lengths of the proximal and the sum of the medial and distal phalanges, respectively. O is a line parallel to the index finger when fully extended, and which passes through M. Q is the perpendicular to O, passing through M. The IFD is the distance between Q and the tip of the index finger
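The IFD equation translates directly into code; the function below is our illustrative sketch (phalanx lengths in cm, flexion angles in degrees relative to the unflexed position, as in the text):

```python
from math import cos, radians

def index_finger_displacement(alpha_deg, beta_deg, l1, l2):
    """IFD = l1*cos(alpha) + l2*cos(alpha + beta), where alpha and beta are
    the MCP and PIP flexion angles and l1, l2 the phalanx lengths."""
    a = radians(alpha_deg)
    b = radians(beta_deg)
    return l1 * cos(a) + l2 * cos(a + b)
```

With both joints unflexed the IFD equals the full finger length l1 + l2, and it shrinks toward zero as the finger curls, which is why its maximum over the reach serves as a one-finger proxy for peak grip aperture.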

Discriminant analysis

Two different types of discriminant analysis were used: Stepwise Discriminant Analysis (SDA) and Canonical Discriminant Analysis (CDA). The former (SAS STEPDISC) is a forward stepwise procedure that selects discriminating variables against a statistical criterion (Wilks' lambda). At each step, variables that meet the criterion are added to the model and those that fall below it are removed. The fact that a variable is selected therefore indicates that it contributes significantly to the variance of the data. The result of SDA is a reduced set of variables which accounts for most of the variability in the data.
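A simplified stand-in for this forward selection can be sketched as follows. Wilks' lambda is det(W)/det(T), with W the pooled within-group scatter and T the total scatter; smaller values indicate better group separation. Note that SAS STEPDISC additionally applies F-to-enter/F-to-remove significance tests, which this sketch omits:

```python
import numpy as np

def wilks_lambda(X, y, cols):
    """Wilks' lambda det(W)/det(T) for the selected columns of X."""
    Xs = X[:, cols]
    T = np.cov(Xs, rowvar=False, bias=True) * len(Xs)   # total scatter
    W = np.zeros_like(T)
    for g in np.unique(y):
        Xg = Xs[y == g]
        W = W + np.cov(Xg, rowvar=False, bias=True) * len(Xg)
    return np.linalg.det(np.atleast_2d(W)) / np.linalg.det(np.atleast_2d(T))

def forward_stepwise(X, y, max_vars=3):
    """Greedy forward selection: at each step add the variable that most
    reduces Wilks' lambda (a simplified analogue of SAS STEPDISC)."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < max_vars:
        best = min(remaining, key=lambda j: wilks_lambda(X, y, selected + [j]))
        selected.append(best)
        remaining.remove(best)
    return selected
```

In the study's setting, X would hold the ten joint angles at one time epoch and y the object-shape labels, yielding the best-discriminating joints at that epoch.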

An SDA was performed on the ten joint angles at each of the selected time points (5–95% at 10% intervals) in order to find the best discriminating joints between the hand configurations for each object at each time point. The temporal pattern of the selected joints offers, by itself, an index of the relative joint usage during grasping. Therefore, SDA-selected joints were entered into statistical comparisons as the sum of selected joints across time or as the sum of selected joints per subject at each time point.

Once a subset of the initial joints was selected (in the absence of statistically significant variables, all joints were included), CDA was performed on the selected variables to obtain a measurement of the error of hand preshaping during the transport phase of the movement and to estimate the predictive value of this measure for the final hand shape per object type. CDA produces a classification of the data into a set of previously determined categories by defining a centroid per category and calculating the (variability-weighted) distance (Mahalanobis 1936) between each data point and each of the centroids. A data point will be included by CDA into the category whose centroid is closest. A measurement of error, therefore, becomes available when data points are classified into the wrong groups (misclassifications).
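The centroid-based classification can be sketched as a nearest-centroid rule under Mahalanobis distance with a pooled within-class covariance; this mirrors the description in the text, though CDA proper works in canonical variate space (the classifier below is our illustrative implementation, not the SAS procedure):

```python
import numpy as np

def mahalanobis_classify(X_train, y_train, X_test):
    """Assign each test point to the class whose centroid is nearest in
    Mahalanobis distance (pooled within-class covariance). Comparing the
    predictions to the true labels yields the misclassification error."""
    classes = np.unique(y_train)
    centroids = {c: X_train[y_train == c].mean(axis=0) for c in classes}
    # pooled within-class covariance (pinv tolerates near-singular pooling)
    S = sum(np.cov(X_train[y_train == c], rowvar=False) * (np.sum(y_train == c) - 1)
            for c in classes) / (len(X_train) - len(classes))
    S_inv = np.linalg.pinv(np.atleast_2d(S))
    preds = []
    for x in X_test:
        d2 = {c: (x - m) @ S_inv @ (x - m) for c, m in centroids.items()}
        preds.append(min(d2, key=d2.get))
    return np.array(preds)
```

Applied at each normalized time epoch, the fraction of joint-angle vectors assigned to the wrong object class reproduces the classification-error index analyzed in the Results.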

Predicted effects of visual feedback on hand preshaping for specific object comparisons

Different object comparisons should follow different classification patterns in time. While we expect the availability of visual feedback to affect hand preshaping for all object comparisons, the SQ-CC comparison, due to the subtlety of its differentiation (small change in flexion, no abduction, see Fig. 1), should more strongly reflect the fact that subjects rely on visual feedback for the control of hand preshaping during the last quarter of the grasping movement.

Statistical analyses

A four-way repeated-measures ANOVA [Object comparison (CC-CX, CX-SQ, SQ-CC), Condition (Full Vision, Object Vision, No Vision), Time (5%–95%) and Complementary Classification (e.g., CC as CX and CX as CC)] was performed on the individual classification error percentages. Planned comparisons for the effects of visual feedback and object shape on hand preshaping were performed through separate three-way repeated measures ANOVAs on the classification error percentages. To examine the effect of visual feedback, Condition, Time and Complementary Classification were used as factors. To examine the effect of object shape, the factors were Object Comparison, Time and Complementary Classification. Finally, two-way repeated-measures ANOVAs were performed on the temporal parameters of the wrist kinematics and TPA with visual feedback and object shape as factors. Post hoc tests (Tukey's HSD) were employed to test for significant differences between factor means in all significant second order interactions. However, due to the general difficulty in the interpretation of complex interactions, we will limit our discussion to second order interactions.


Basic kinematic and coordination parameters

Analysis of variance revealed significant main effects of both object (F(2,16)=11.87, p=0.0007) and condition (F(2,16)=7.36, p=0.0054) on movement time (MT). Subsequent post hoc tests revealed that the Full Vision condition had significantly shorter movement times than either of the reduced visual feedback conditions (see Table 1). This finding suggests that without vision of the arm, subjects provide themselves with a margin of error in order to ensure efficiency during prehension. Post hoc tests also revealed significant differences in the MT depending on object shape, with the SQ object requiring the shortest MT, followed by the CX, and with the CC requiring the longest MT (Table 1). There were no significant interactions.
Table 1.

Kinematic parameters of the reach across conditions and objects (MT movement time, TPAcc time to peak acceleration, TPV time from peak acceleration to peak velocity, TPD time from peak velocity to peak deceleration, %TPA relative time to peak aperture)

[Values of MT, TPAcc, TPV, TPD and Sec. 4 (all in ms), tabulated by object shape (SQ, CC, CX) and visual feedback condition (Full Vision, Object Vision, No Vision); the numerical entries are not reproduced here]
In order to look more closely at the temporal structure of the movement, we divided MT into four sections (see "Materials and methods"). Repeated-measures ANOVAs for Sections 1, 2 and 3 produced no significant effects or interactions. A repeated-measures ANOVA for Section 4 of the movement did reveal significant main effects of both Object (F(2,16)=13.72, p=0.0003) and Condition (F(2,16)=10.93, p=0.0010). Subsequent post hoc tests showed that all visual feedback conditions were significantly different, with the Full Vision condition exhibiting the shortest deceleration time and the No Vision condition the longest. Furthermore, trials involving the CC object were found to have a longer deceleration time than those involving the SQ object. The Object × Condition interaction was not statistically significant.

A basic parameter reflecting the coordination between the transport and prehension components is the TPA. A repeated-measures ANOVA revealed significant main effects of Condition (F(2,16)=3.67, p=0.048) and Object (F(2,16)=5.96, p=0.011) as well as a significant interaction of Condition × Object (F(4,32)=3.87, p=0.011). Subsequent post hoc tests showed that the TPAs for the Object Vision and the No Vision conditions were significantly earlier than for the Full Vision condition, again supporting the notion that subjects provided a margin of error in opening the hand to counter the effects of reduced visual feedback.

Modulation of hand configuration

Finger joint angles during the reach-to-grasp movement present a stereotyped subject-specific profile of extension and flexion as the hand reaches peak aperture and closes in on the target object. Figure 3 shows the temporal patterns of the ten finger joints for a typical subject when grasping a CX object in the Full Vision condition. In the top eight graphs it can be observed that finger extension increases gradually from around 20% to approximately 60% of movement time, when maximum aperture is reached. Following this, the fingers close, reaching a shape-dependent final configuration. The bottom two graphs show the opening of the abduction joints following a similar temporal pattern.
Fig. 3.

Angle over time traces for eight trials to a CX object in the Full Vision condition for a typical subject. Time has been normalized to unit duration. All joints included in the study are represented (MCP metacarpophalangeal joints, PIP proximal interphalangeal joints, ABD abduction/adduction joints)
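The normalization of each trial to unit duration, and the sampling at fixed normalized epochs (5–95%) used in the later analyses, amount to a simple resampling. A minimal sketch, assuming linear interpolation between samples:

```python
import numpy as np

def normalize_time(trace, n_points=11):
    """Resample a joint-angle trace onto normalized movement time,
    returning samples at the 5%, 15%, ..., 95% epochs by default."""
    trace = np.asarray(trace, dtype=float)
    t_old = np.linspace(0.0, 1.0, len(trace))     # original samples span onset..offset
    t_new = np.linspace(0.05, 0.95, n_points)     # normalized analysis epochs
    return np.interp(t_new, t_old, trace)
```

Since movement times differ across trials and conditions, this resampling is what allows joint angles from different trials to be compared at the same fractional time point.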

Temporal patterns of joint use

We used Stepwise Discriminant Analysis (SDA) to select finger joints that contributed significantly to the variance of the data (see "Materials and methods"). A repeated-measures ANOVA conducted on the number of SDA-selected joints per time point with condition and time as factors revealed a main effect of Time (F(9,72)=41.2, p<0.0001) but not of Condition (F(2,16)=0.003, p=0.99). There were no significant interactions. Post hoc tests revealed that the earliest time points (5–35%) were significantly different from later time points (65–95%), indicating that there is an increase in the number of joints employed by subjects during the reach-to-grasp movement, regardless of visual feedback availability. The overall pattern of joint recruitment appeared to involve a linear increase in the number of joints during the first half and a plateau during the second half of the grasp.

Abduction patterns

The pattern of finger abduction during the grasping movements towards the different object shapes was quantified through abduction/adduction sensors between the index and middle fingers (ABD1) and between the middle and ring (ABD2) fingers. Figure 4 shows the abduction patterns for angle ABD1 during the Full Vision condition. As can be observed, finger abduction during grasps towards object CX increases rapidly after time 15% compared to objects CC and SQ. A repeated measures ANOVA on the angle data for ABD1 was conducted with Time, Object and Condition as factors. A significant main effect of Time (F(9,72)=3.09, p=0.003) and Object (F(2,16)=40.3, p<0.0001) was observed but not of Condition. Significant interactions of Time × Object (F(18,144)=30.37, p<0.0001) and Time × Object × Condition (F(36,288)=5.02, p<0.0001) were also observed. Subsequent repeated measures ANOVAs for each time point revealed significant main effects of Object but not of Condition at times 15% through 65% of the movement. Time points 75% through 95% showed both a significant main effect of Object and of Condition. Subsequent post hoc tests revealed a significant difference between the ABD1 angle for object CX vs. objects SQ and CC at times 25% through 95%. There were no significant differences between conditions at any time point.
Fig. 4.

Patterns of finger abduction (ABD1) during the Full Vision condition during grasping movements directed at the three experimental objects. Error bars are ±SEM

Discriminant analysis

In order to analyze the evolution of hand configuration during the reach-to-grasp movement, Stepwise Discriminant Analysis (SDA) was performed on the data matrices containing the ten (four MCPs, four PIPs and two ABDs) joint measurements at every 10% epoch from 5% to 95% of the movement (see "Materials and methods"). This procedure allowed us to select a smaller set of variables at each time point to account for the variability of the data. As described in "Materials and methods," each of the selected angles contributes significantly to the overall variability of the hand configuration at the specific time point. A Canonical Discriminant Analysis (CDA) performed on the selected joints at each time point per subject provided us with a measurement of the continuous change in hand configuration as predictive of the target shape, in the form of a classification error. As would be expected, as the hand preshaped in time, the error index obtained lower values for all object shapes. However, there were temporal differences in the preshaping patterns as a result of experimental condition and object shape.

Effects of object shape and visual feedback manipulation on the temporal patterns of hand configuration

A four-way repeated-measures ANOVA was conducted on the percentage of misclassifications for each of the three pairs of complementary classifications at each time point for all the experimental conditions. Significant main effects of object comparison (F(2,16)=8.37, p=0.003) and time (F(9,72)=93.93, p<0.0001) were found. The main effects of condition (F(2,16)=0.38, n.s.) and complementary classification (F(1,8)=0.55, n.s.) were found to be non-significant. Significant interactions of time × object comparison (F(18,144)=3.05, p=0.0001) and time × condition (F(18,144)=3.9, p<0.0001) were found but not of time × complementary classification. All other second order interactions were not statistically significant. The third order interaction of time × object comparison × complementary classification (F(18,144)=2.08, p=0.009) as well as the fourth order interaction of time × object comparison × condition × complementary classification (F(36,288)=1.85, p=0.003) were found to be statistically significant.

Figure 5 shows the total classification error data for the object comparisons pooled across visual conditions at each time point. As can be observed, error levels decrease in all data sets as the movement unfolds, showing the main effect of time. However, while the error levels for the object comparisons including the CX object drop rapidly within the first half of the motion, the error levels for the SQ-CC comparison drop noticeably later in the movement. Post hoc tests confirmed this observation. The SQ-CC comparison was found to be significantly different from the CC-CX and CX-SQ comparisons from time 15% to time 85% of the movement (represented by the shaded area in Fig. 5), with the comparisons involving the CX object showing lower classification error levels. Also, at time 35% all object comparisons were found to be significantly different from one another. At this time point, the CC-CX comparison exhibited the lowest classification error level and the SQ-CC comparison exhibited the highest classification error level.
Fig. 5.

Patterns of classification errors at 10% epochs of time during a reach-to-grasp movement across visual feedback conditions. Classification error levels decrease in all conditions as hand configuration evolves into shape-specific grasps. The shaded area indicates significant differences between the "single dimension" comparison (SQ-CC) vs. the other two object comparisons (CC-CX, CX-SQ). The asterisk indicates a significant difference between all object comparisons. Error bars are ±SEM

These results suggest that the shape of the target object affects the evolution of hand configuration at different stages of the reach-to-grasp movement; more specifically, (a) the abduction "dimension" of hand preshaping was resolved early during the movement; (b) as predicted, the CC-CX comparison, which involves the largest distance in the handshape dimension space defined by our task, was the easiest of the three comparisons to classify; and (c) the SQ-CC comparison, which depends on the flexion/extension dimension, was resolved in a more gradual, linear fashion over the course of the grasping movement.
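The time-resolved pairwise classification underlying Fig. 5 can be sketched as follows. This is an illustrative reconstruction on synthetic joint-angle data, not the authors' analysis: the use of linear discriminant classification with leave-one-out cross-validation, the trial counts, the number of joint angles, and the linear-divergence model of preshaping are all assumptions made for the sketch.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(1)
n_trials, n_joints, n_epochs = 30, 10, 10  # assumed sizes, not the paper's

def trajectories(offset):
    """Joint-angle trajectories whose object-specific signature grows over time."""
    noise = rng.normal(0.0, 1.0, (n_trials, n_epochs, n_joints))
    ramp = np.linspace(0.0, 1.0, n_epochs)[None, :, None]  # 0 at onset, 1 at contact
    return noise + offset * ramp

# Two target "objects": hand postures identical at movement onset,
# divergent near contact as the hand preshapes.
X_a, X_b = trajectories(0.0), trajectories(1.5)
y = np.r_[np.zeros(n_trials), np.ones(n_trials)]

errors = []
for t in range(n_epochs):
    # Classify object identity from the joint angles at this 10% epoch.
    X = np.vstack([X_a[:, t, :], X_b[:, t, :]])
    acc = cross_val_score(LinearDiscriminantAnalysis(), X, y,
                          cv=LeaveOneOut()).mean()
    errors.append(1.0 - acc)

# Classification error falls across epochs as the grasps become shape-specific.
print([round(e, 2) for e in errors])
```

Under this toy model, error starts near chance at movement onset and approaches zero by the final epoch, mirroring the qualitative pattern in Fig. 5.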


Discussion

The main goal of the present set of experiments was to observe and characterize the evolution of specific hand configurations during the reach-to-grasp movement and their modulation by different amounts of visual feedback. Our results indicate the presence of early mechanisms of hand preshaping that depend on object shape regardless of visual feedback availability, as well as late "corrective" mechanisms that are thought to depend on the availability of vision.

Effects of object shape and visual feedback on kinematic parameters

The use of target objects of different shapes allowed us to probe the influence of target complexity on kinematic parameters. Regular objects such as our SQ shape permit a simple strategy for finger placement during a reach-to-grasp movement. In contrast, finger prehension of the CX and CC shapes requires placing the fingers at more specific locations. Our finding that movement time (MT) varied with object shape supports this notion.

Evolution of hand configuration

Effects of object shape

We found that hand configuration was affected by the shape of the target object at different stages of the reach-to-grasp movement. Regardless of visual feedback condition, the earliest effects of target shape in our study were observed when the classification of hand configurations included the abduction dimension. Hand configuration data from trials with the CX object as target were differentiated early in the movement from those of CC and SQ trials. In all conditions, the classification error between the CX object and the other two objects reached low levels before 50% of movement time. This result is noteworthy, since there are no previous reports of hand preshaping processes taking place this early during prehension.

Given the geometry of the human hand, when the fingers are strongly abducted and in opposition with the thumb, the fingertips form an arch while the palm curves as the pinkie rotates towards the thumb. Adducting the fingers eliminates this configuration (Marzke and Marzke 2000). By "cupping" the hand during CX trials and adducting the fingers during SQ trials, prehension presumably can be accomplished with minimal disparity across digits in the time to contact with the object. A possible explanation for this temporal constraint is that, during finger prehension, the distribution of forces along the vertical axis of the object is critical for efficient grasping.

The early selection of grasp type (an abducted, "cupped" grip for CX; an adducted grip for SQ and CC) indicates that this process is target-shape dependent. Moreover, its occurrence in the No Vision condition suggests that information about target shape is used in planning the movement, specifically in grip selection. Overall, our results on the early phase of handshape determination suggest that the finger prehension grip employed in our task may be accomplished with at least two basic hand configurations, namely abducted and adducted.

Effects of visual feedback

In our experiment, the predictive control processes employed by subjects in the reduced visual feedback conditions were sufficient to preshape the hand in a manner that was differentiable by our analysis. Contrary to this interpretation, a recent study by Santello et al. (2002) suggests that visual feedback may not produce detectable changes in hand posture during the reach-to-grasp movement. However, Santello and colleagues placed no constraints on the grip used by their subjects, whereas we asked subjects to employ a fixed grip requiring specific finger placement, forcing them to attend to their movements. It can be argued that both types of prehension are "natural" and that the differences are mainly task related. For example, careful finger placement and close attention are required for grasping fragile or easily toppled objects such as a wineglass. Conversely, unattended grasping requiring little more than position and size information (extrinsic characteristics) may suffice for grasping a baseball or a key ring. Further experimentation will be needed to test whether our results generalize to everyday tasks.

Neural substrates of hand configuration determination

According to the classical interpretation of the "two visual systems" hypothesis (Ungerleider and Mishkin 1982), object shape is processed solely by the ventral pathway. However, recent data suggest that the dorsal pathway, generally thought to underlie visuomotor behavior, does convey shape information to motor control areas of the brain (Zeki 1993; Jeannerod 1997). Shikata et al. (1996) described neurons selective for surface orientation in the lateral bank of the caudal part of the intraparietal sulcus (IPS) of non-human primates. Such information could be used downstream to build a shape-derived representation. The IPS projects largely to area F5 of premotor cortex (ventral premotor cortex) (Luppino et al. 1999), a region identified by Rizzolatti et al. (1988) as involved in grip selection in macaques. The existence in F5 of neurons selective for particular types of grasping, especially grips involving "delicate finger movements," supports the idea that grips may be classified into a number of basic families which can subsequently be modulated in a task-dependent fashion.

The selection of a given family for a specific grasp would presumably depend on the basic geometry of the target object as well as on its intended use by the subject. The correspondence between each grip and its preferred shape is probably established during ontogeny by an interaction between the geometry of the hand and the geometry of target objects giving rise to individual differences in joint utilization during grasping. The discriminant factor analysis reported in Santello and Soechting (1998) points to the subject specificity of hand configuration during the reach-to-grasp movement. Similar intersubject variability during grasping has also been observed in non-human primates (Gardner et al. 1999). Recent data on the development of prehension in children (Kuhtz-Buschbeck et al. 1998) indicate that there is a progression in the stabilization of two-finger prehension in both the transport and grasp components of the reach-to-grasp movement from age 4 to 12 years. Thus, one may speculate that the basic grip families are also defined during this period based on the constraints imposed by the physical environment and the geometry of the human hand, and that these include a certain amount of individual variation. In our experiment the CX object elicited the selection of a specific cupped grip both quantitatively and qualitatively different from the adducted grip used to grasp the SQ and CC objects, thereby resulting in early differentiation. Furthermore, the flexed grip used to grasp the CC object may be characterized as a modulation of the adducted grip used for the SQ object.

The present results suggest that handshape specification consists of, at least, two complementary motor control processes: an early grip selection process which determines the basic handshape appropriate for the geometry (and possibly for the orientation and even intended use) of the target object, and a slower grasp modulation process which refines the basic grip to its specific dimensions and peculiarities.


Acknowledgements

The research was supported in part by Research Grant 1 R01 NS36449-04 from the National Institute of Neurological Disorders and Stroke, National Institutes of Health, to Rutgers University. The authors would like to thank Drs. J. Soechting and M. Santello for providing the blueprints of the object shapes employed in this study.

Copyright information

© Springer-Verlag 2003