Introduction

Our haptic sense appears to be a crucial source of information on the structure and contents of peripersonal space. That is, it offers information on the size, shape, texture, location and orientation of objects within our reach. Intuitively, haptic perception appears quite accurate: we can efficiently handle objects without having to look at them. Remarkably, though, various studies have shown that, for several spatial properties, haptic perception is not always as reliable as often supposed (cf. Marks and Armstrong 1996; Lanca and Bryant 1995; Lederman et al. 1985; Appelle and Countryman 1986; Gentaz and Hatwell 1998, 1999).

Arguably, a rather important feature of touch involves orientation processing. It offers vital information on how to handle objects, for instance when reaching for them and picking them up. Under these circumstances the information may be coded with respect to an egocentric frame of reference (e.g. with respect to one's own body). Alternatively, haptic orientation processing might also afford information on the alignment of objects with respect to each other or to an external reference frame (such as the sides of the table on which the haptic tasks are performed). In that case, allocentric spatial coding is presumed to take place.

This paper gives an overview of studies on the haptic processing of spatial orientation, conducted in the departments of Physics of Man and Experimental Psychology at Utrecht University. A central theme is to what extent haptic perception of spatial orientation is intrinsically linked to concurrent visual processing mechanisms.

What happens over time?

Kappers (1999, 2002) and Kappers and Koenderink (1999) systematically explored the parallel setting of two bars, either bimanually or unimanually, at different locations in the horizontal plane (see Fig. 1), the frontoparallel plane, and the midsagittal plane (see also Hermens et al. 2006; Volcic et al. 2007). They observed a comparable pattern of quite robust deviations, suggesting that what feels parallel is in fact often far from parallel.

Fig. 1

Participant in the parallel setting task. Left hand is inspecting the reference bar while the right hand is waiting for the signal to start the parallel setting

Most interestingly, these deviations were rather systematic: if the test bar was to the right of the reference bar, the deviations were in the clockwise direction; if the test bar was to the left of the reference bar, the deviations were counterclockwise. It was argued that this reflects the fact that the hands are the prime egocentric reference frames involved in this task (Kappers 1999). Figure 2 illustrates this nicely. When a whole array of bars has to be set parallel, one can observe differences between any pair of bars. There is a gradual but systematic growth in the orientation difference with the distance in space between the two bars, which reflects the preferred placement of the hand for the different bar locations. Notice that this mismatch applies to orientation in allocentric space (e.g. relative to the table) but not to egocentric space, i.e. orientation with respect to the hand remains the same (a toy model of this pattern is sketched below Fig. 2).

Fig. 2

All bars feel parallel to each other according to the settings of a representative subject using her right hand
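To make the egocentric account more tangible, the following minimal sketch (our own illustration, not a model fitted by Kappers) treats the produced setting as a weighted average of the allocentric reference orientation and an egocentric, hand-centred contribution that rotates with the hand's preferred placement. The weight and all numbers are hypothetical.

```python
def predicted_setting(ref_deg, hand_rotation_deg, w_ego=0.4):
    """Toy model of haptic parallel setting.

    ref_deg           -- reference bar orientation in table (allocentric)
                         coordinates, in degrees (clockwise positive)
    hand_rotation_deg -- rotation of the hand frame at the test location
                         relative to the hand frame at the reference
                         location; grows with the distance between bars
    w_ego             -- hypothetical weight of the egocentric frame
    """
    return ref_deg + w_ego * hand_rotation_deg

# A test bar far to the right, where the hand is rotated ~30 degrees
# clockwise, yields a clockwise setting error; a mirror-image hand
# rotation on the left yields a counterclockwise error:
print(predicted_setting(45.0, 30.0))   # 57.0 -> 12 deg clockwise error
print(predicted_setting(45.0, -30.0))  # 33.0 -> 12 deg counterclockwise error
```

Because the assumed hand rotation grows with the separation between the bars, such a model also reproduces, qualitatively, the gradual increase of the deviations with distance seen in Fig. 2.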

Intuitively, it makes sense that the haptic modality strongly relies upon an egocentric reference frame. The first stages of haptic information processing are directly tuned to which part of the body is stimulated and where the limbs are in space. However, over time there might be a transition from egocentric coding to a more allocentric representation. Indeed, introducing a delay of 5 to 8 s between input and response has been found to lead to a more allocentric performance pattern in both visual and proprioceptive pointing experiments (Milner et al. 1999; Rossetti et al. 1996; Rossetti and Régnier 1995).

In line with this, Zuidhoek et al. (2003) demonstrated that haptic orientation processing shows a similar ameliorating effect of delay. Using the parallel setting task, they found an improvement of about 30–50% with a 10 s delay (see Fig. 3). It should be noted that in the parallel setting task veridical and allocentric performance coincide, so 'getting better' means by definition interpreting the haptic orientation within an accurate, allocentric reference frame (cf. Kappers 2003).

Fig. 3

The effect of delay for each distance condition in experiment 2 of Zuidhoek et al. (2003). The error bars show ±1 standard error of the mean. Adapted from Zuidhoek et al. (2003)
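The delay benefit can be captured in the same toy model by letting the egocentric weight decay during the retention interval. The exponential form and the time constant below are our own assumptions, chosen only so that a 10 s delay produces an improvement in the reported 30–50% range.

```python
import math

def w_ego(delay_s, w0=0.4, tau=20.0):
    """Hypothetical egocentric weight decaying over the response delay."""
    return w0 * math.exp(-delay_s / tau)

def deviation(hand_rotation_deg, delay_s):
    """Setting error predicted by the toy model after a given delay."""
    return w_ego(delay_s) * hand_rotation_deg

immediate = deviation(30.0, 0.0)   # 12.0 deg
delayed = deviation(30.0, 10.0)    # ~7.3 deg
print(1.0 - delayed / immediate)   # ~0.39, i.e. a ~39% improvement
```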

In a recent neuroimaging study, Kaas et al. (2007a) examined the neural circuitries underlying haptic orientation matching with variable delays. They found that the right sensorimotor cortex was most active during the first seconds after exploration. Slightly later the left anterior prefrontal cortex became more active, and at the end of a 10 s delay a left parietal–occipital circuit appeared most involved. Speculatively, the latter might be taken as an indication of visual recoding during the delay. Notably, Kaas et al. (2007a) did not find differences in neural activity between orientation matching and mirroring conditions. It has been argued that correct matching performance requires allocentric coding, whereas successful mirroring can be achieved by either allocentric or egocentric coding (Kaas et al. 2006). As such, the fact that the two task conditions evoke similar neural circuitries could be seen to oppose the notion of separate neural pathways for allocentric and egocentric processing of haptic space. A further comparison of haptic matching and mirroring is clearly necessary (cf. Kaas et al. 2007a).

What determines allocentric recoding?

Allocentric recoding means that one relates spatial information to external frames of reference, which are independent of an observer's body and perspective. Typically, such reference frames are centered on arrays of objects or landmarks in the surroundings, or on marked boundaries such as the walls of a room. In the case of touch one could use the edges of a table or raised reference lines on top of a surface (cf. Millar and Al-Attar 2004). In the haptic tasks discussed here, these cues were never available. So what then would have caused the allocentric recoding with delay? Gentaz and Hatwell (1999) suggest that retention of haptic information after perception requires effortful processing or is at least under conscious cognitive control. Zuidhoek et al. (2003) argued that such cognitive control would to a major extent involve turning the felt information into a visuospatial image.
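Computationally, the recoding at issue can be pictured as a change of coordinates: an orientation initially expressed relative to the hand is re-expressed relative to the table once the hand's pose is taken into account. The sketch below is a minimal illustration of that idea, not a claim about the actual neural computation.

```python
def to_allocentric(orientation_in_hand_deg, hand_azimuth_deg):
    """Re-express a felt bar orientation, given in hand-centred
    coordinates, in table-centred (allocentric) coordinates.

    Bar orientations are undirected, so angles wrap modulo 180 degrees.
    """
    return (orientation_in_hand_deg + hand_azimuth_deg) % 180.0

# Two bars that feel identical relative to differently rotated hands
# turn out to differ once expressed in table coordinates:
print(to_allocentric(20.0, 0.0))   # 20.0
print(to_allocentric(20.0, 30.0))  # 50.0
```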

In line with this notion, Newport et al. (2002) showed that viewing the region of space directly above the haptic workspace (so-called non-informative vision) improves parallel-setting performance. Zuidhoek et al. (2004b) used a similar setup: the non-informative vision condition was created by an opaque cloth covering the workspace (i.e. the table and bars) as well as the participants' shoulders, arms and torso. They replicated the non-informative vision advantage and additionally demonstrated that the direction of head and eyes has an independent impact on parallel-setting performance: directing head and eyes towards the reference bar resulted in better performance than orienting straight ahead or towards the test bar (see Fig. 4).

Fig. 4

Effects of non-informative vision and head orienting on haptic parallel setting. Neutral condition means straight-ahead head orientation. Adapted from Zuidhoek et al. (2004b)

Apparently, non-informative vision offers a visual background frame in which the haptic information can be recoded and used more efficiently. Directing one's head towards the reference bar either stimulates the generation and expansion of a visual image in which the haptic input is integrated (Zuidhoek et al. 2004b), or may offer additional cues (e.g. head and limb synchronies) with which the haptic inputs can be combined (cf. Millar and Al-Attar 2004).

Corroborating the idea that visual factors might be important for haptic orientation processing, Kaas et al. (2007b) combined haptic parallel or mirror setting with congruent and incongruent concurrent visual inputs. A complex pattern of interfering and facilitating visual influences on haptic performance was observed, suggesting a flexible weighting of the visual and somatosensory information depending on the task at hand.
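One standard way to formalize such flexible weighting (not claimed by Kaas et al. 2007b themselves) is reliability-weighted cue combination, in which each modality's estimate is weighted by its inverse variance; a more reliable visual signal then pulls the combined orientation estimate toward itself. All numbers below are hypothetical.

```python
def combine(est_vis, var_vis, est_hap, var_hap):
    """Reliability-weighted (inverse-variance) combination of a visual
    and a haptic orientation estimate, both in degrees."""
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_hap)
    return w_vis * est_vis + (1.0 - w_vis) * est_hap

# An incongruent visual bar biases the combined percept toward itself,
# more strongly the more reliable vision is:
print(combine(40.0, 4.0, 50.0, 16.0))   # 42.0 -> strong visual pull
print(combine(40.0, 64.0, 50.0, 16.0))  # 48.0 -> weak visual pull
```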

Training allocentric interpretations of haptic space

One question which might arise is whether erroneous performance on the haptic orientation matching task simply follows from participants being unused to the blindfolded test conditions. Moreover, blindfolding might evoke counterproductive anxiety and arousal. It should be noted that the observed systematicity in the error patterns counters this possibility. Also, performance has been found to remain stable even after a great many trials. Still, it would be interesting to consider to what extent haptic parallel setting might be trained, and if so, which training conditions would be most effective and would stimulate allocentric coding.

In a recent, elaborate series of experiments, Kappers et al. (in press) examined the extent to which performance might be altered by training and feedback. It was found that the systematic error patterns were quite consistent; only particular conditions combining visual and haptic pretest training with immediate haptic and visual error feedback improved performance significantly, and even then only to a limited extent. Notably, these improvements varied strongly across individual participants. Hence, reliance on an egocentric reference frame in haptic tasks appears strongly prewired, and the ability to change it shows strong individual differences. Bearing upon the latter, Kappers (2003) reported men to be better in haptic orientation processing than women, a finding later replicated by Zuidhoek et al. (2007).

Haptic orientation processing in the blind

Assuming that visual mechanisms indeed affect haptic processing of orientations, it is of clear interest to study how individuals with limited visual abilities fare on the parallel setting task. The literature is not fully consistent on how blind individuals compare to blindfolded sighted controls on various haptic tasks. Regarding the handling of objects and shapes, Heller (1989) observed congenitally and late blind participants to be faster than the blindfolded sighted at matching simple braille-sized 2-D shapes, while no differences in matching accuracy were found. In turn, Morrongiello et al. (1994) did not report identification speed differences between blindfolded sighted and early blind children (aged 3–8 years) for familiar 3-D stimuli. Regarding the processing of haptic space, Hollins and Kelley (1988) reported that blindfolded sighted participants outperformed blind participants when relocating previously inspected items after having moved to a new position with respect to the display. Ungar et al. (1995) also observed that blind and visually impaired children had particular difficulties with rotated object arrays. Rossetti et al. (1996) and Gaunet and Rossetti (2006) demonstrated that pointing towards targets previously felt with the other hand yielded, in a blindfolded sighted group, pointing distributions aligned with the target array after 8 s, while in blind subjects the main axis was aligned with the movement direction, both when pointing immediately and after a delay.

Zuidhoek et al. (2004a) studied haptic orientation processing in five congenitally blind and five late blind individuals. Most importantly, Fig. 5 shows that while the late blind exhibited the expected delay improvement in the parallel setting of bars, the congenitally blind almost completely lacked this improvement. These results support the idea that during the delay the haptic input is transformed into a more allocentric representation, a transformation that could be critically dependent on visual imagery ability and visual experience. Recently, an extended sequel study has been completed including more blind individuals as well as blindfolded sighted controls (A. Postma et al., submitted), revealing similar performance differences: early blind participants did not profit from a delay; late blind participants did, but the effect was strongest in the blindfolded sighted.

Fig. 5

Parallel setting by congenitally (CB) and late blind (LB) individuals—means and SE. Adapted from Zuidhoek et al. (2004a)

Imagining angles and orientations

The foregoing supposes that limited visual imagery ability might hamper blind individuals in their processing of haptic space. More compelling evidence would of course follow from a direct test of imagery ability in the blind. In a review, Kaski (2002) convincingly argued that even congenitally blind individuals can work with mental images. Still, there might be quantitative and qualitative performance differences with sighted persons which could affect certain aspects of haptic space processing (cf. Aleman et al. 2001; Cornoldi and Vecchi 2003). Thinus-Blanc and Gaunet (1997) suggested that lack of vision from early in life lowers the amount of information stored in the form of mental images and complicates complex computations that rely on such representations. Noordzij et al. (2007) conducted a study on imagery ability in blind and sighted individuals which appears directly relevant to the current issue of haptic orientation processing. While there were only marginal group differences on an auditory and a visual form imagery task, sighted individuals outperformed the blind on a spatial imagery task. Importantly, this task asked participants to generate analogue images of two digitally presented clock times and to compare the angles between the hands of the clocks for this pair of times.
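For concreteness, the angular quantity participants had to compare can be computed as follows (the actual stimuli and procedure are described in Noordzij et al. 2007; this sketch only shows the geometry of the judgment).

```python
def hand_angle(hour, minute):
    """Smaller angle (deg) between the hour and minute hands of an
    analogue clock showing the given time."""
    hour_deg = (hour % 12) * 30.0 + minute * 0.5  # hour hand drifts 0.5 deg/min
    minute_deg = minute * 6.0                     # minute hand moves 6 deg/min
    diff = abs(hour_deg - minute_deg) % 360.0
    return min(diff, 360.0 - diff)

# Which of the two presented times has the larger angle between its hands?
print(hand_angle(3, 0), hand_angle(4, 50))  # 90.0 vs 155.0 -> the second
```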

As can be seen in Fig. 6, while all groups showed an inverse linear relation between angular differences and error rates, indicating continuous, analogue mental computations, the blind groups clearly had more difficulty here. This suggests a difficulty in mentally imaging spatial orientations. We wish to argue that this also affects the perception and interpretation of the orientations which the hands feel.

Fig. 6

Judging the angles between the hands of the clock for digitally presented pairs of clock times. Adapted from Noordzij et al. (2007)

Conclusions

We can normally handle objects within our reach quite efficiently by means of touch alone. For example, when picking up a book without looking, our grip quickly adjusts on the basis of the felt texture and weight of the book. This might involve a mostly automatized, implicit or semi-conscious mode of processing (cf. Wippich 1991). Things might be different, though, if a more cognitive usage of the haptic information is needed. When having to place the book on our bedside table after having put out the light, our hands need to explore the surface to allow a deliberate decision on whether there is enough space for the book. Arguably, haptic orientation matching involves explicit processing of haptic information. Matching typically involves different locations in space as well as different hands. Moreover, it is under voluntary, conscious control, and participants explicitly, though not necessarily correctly, interpret what they have felt.

The present paper has given an overview of a recent series of studies from the departments of Physics of Man and Experimental Psychology at Utrecht University on haptic parallel setting. We have shown that visual experience and visualization can have major effects on haptic orientation matching. Introducing a delay between inspecting a reference bar and setting a test bar leads to a surprising improvement. Maintaining the haptic information in working memory may call for recoding into a different format, possibly a more allocentric, quasi-visual representation. While it should be acknowledged that egocentric reference mechanisms also play a role in the visual modality (cf. Milner and Goodale 1995), visualization might in general be an effective strategy. In line with this speculation, offering visual background information also appears to improve performance. Another form of evidence follows from the observation that (congenitally) blind individuals do not show this improvement over time, or show it to a weaker extent, while in parallel they appear to have limited spatial imagery abilities. Together this strongly points to an important role for visual processing mechanisms in the perception of haptic inputs. Future research should particularly examine whether it is possible to keep haptic inputs available in working memory in a purely haptic format, or whether there is an automatic transformation into a visual image interpretation, and if so, what the time course of this transformation is.

Importantly, the visual influences listed above may differ with the haptic task at hand. Postma et al. (2007) investigated the haptic placement of objects into the appropriate slots of a spatial array, as well as the verbal descriptions subjects gave of the objects and the array after a number of consecutive slot-filling trials. Interestingly, blind participants were faster in slot filling than blindfolded sighted participants (though they did not show a steeper learning curve), whereas visual experience in the late blind improved the verbal descriptions. It was speculated that slot filling depends more on implicit processing, whereas producing a verbal description requires the ability to generate an explicit, conscious representation. For the latter, visual experience and visualization mechanisms might play a special role.