
Attention, Perception, & Psychophysics, Volume 77, Issue 4, pp 1200–1211

Tactile search for change has less memory than visual search for change

  • Takako Yoshida
  • Ayumi Yamaguchi
  • Hideomi Tsutsui
  • Tenji Wake

Abstract

Haptic perception of a 2D image is thought to make heavy demands on working memory. During active exploration, humans need to store the latest local sensory information and integrate it with kinesthetic information from hand and finger locations in order to generate a coherent perception. This tactile integration has not been studied as extensively as visual shape integration. In the current study, we compared working-memory capacity during tactile exploration with that during visual exploration, as measured in change-detection tasks. We found a smaller memory capacity during tactile exploration (approximately 1 item) than during visual exploration (2–10 items). These differences generalized to position memory and could not be attributed to insufficient stimulus-exposure durations, acuity differences between modalities, or uncertainty over item positions. This low tactile memory capacity suggests that the haptic system is almost amnesic for anything beyond the fingertips and that there is little or no cross-position integration.

Keywords

Active touch · Visual search · Change detection · Working memory

It is known that 2D images that are immediately interpretable by sight are not easily recognized by touch (Heller, 1989; Kennedy & Fox, 1977; Lederman, Klatzky, Chataway, & Summers, 1990; Magee & Kennedy, 1980). Loomis, Klatzky, and Lederman (1991) have offered several explanations for the relative disadvantage of tactile picture perception compared to visual perception. For example, the tactile field of view is narrower than it is for vision, even with the use of multiple fingers (Loomis et al., 1991). Temporal and spatial acuity also is poorer in touch than vision (Loomis & Lederman, 1986). Uncertainty and distortion associated with the kinesthetic monitoring of the hand and fingers seems to be greater compared with the eyes (Balakrishnan, Klatzky, Loomis, & Lederman, 1989; Klatzky & Lederman, 1987). Finally, some of the stimulus materials derived from visual studies may be less familiar in terms of touch (Heller, 1989; Kennedy, 2000; Kennedy & Fox, 1977; Lederman et al., 1990).

In addition to these differences in processing, there are other possible limitations associated with higher-order processing related to working memory, which enables perceptual integration over time to build up a coherent perception (Hochberg, 1986). Because tactile perception is normally believed to rely more on sequential exploration and local sampling than visual perception does, the observer must rely more heavily on working memory. Working memory stores the current sensory information and updates and integrates that information with the kinesthetic information derived from the hand and fingers with every new exploratory movement. The capacity of tactile working memory has not been studied as extensively as that of visual memory (Gallace & Spence, 2009; Hill & Bliss, 1968). In particular, research on visual sampling during active exploratory processes (e.g., saccadic eye movements and visual searches) suggests that humans can accumulate an item memory across eye movements that equals working-memory capacity (approximately 4 to 7 items; e.g., Irwin, 1991, 1992a; Irwin & Gordon, 1998). A growing number of studies thus suggest that humans can hold only a fairly limited number of items in memory for visual integration across search behavior. However, a number of questions remain. What about tactile integration? How many items are held during an active tactile search? How much information is made available to the higher-order sensory-motor integration process that yields a coherent perception? And can a lack of integration across hand movements account for the relative disadvantage of 2D tactual picture perception compared with vision?

To answer these questions, we assessed the amount of information that can be held in working memory across sensory movements during active tactile scanning and compared it with visual memory capacity under similar circumstances. We used a tactile version of the “visual search for change” task (Rensink, 2000), which has been used to assess representations in visual short-term memory or visuo-spatial working memory during visual exploratory behavior. The task couples a serial memory-comparison task called “change detection” (e.g., Luck & Vogel, 1997) with a serial search in a visual search paradigm (e.g., Treisman & Gelade, 1980). Participants were shown two slightly different displays in an alternating pattern and freely searched for the difference between them. If they held the items from the previous display in memory, they could detect the change; if they retained no memory of those items, they could not. Using this task, Rensink (2000) assessed the number of items in memory during an active visual search and suggested that, when the stimulus-exposure duration was sufficiently long, the number of items that could be held equaled the upper limit of visual working-memory capacity. Our main interest was whether tactile memory during tactile search is similarly limited. To our knowledge, this is the first attempt to assess the amount of memory available for sensory-motor integration during “active” tactile exploratory behavior (for tactile change blindness with stimuli “passively” presented on the body surface, see Gallace, Tan, & Spence, 2006). Through these experiments, we discuss the validity of memory-based accounts of tactile integration across hand movements, and the validity of the method used by Rensink (2000) to assess memory during active search behavior.

Experiment 1

Experiment 1 was designed to replicate the findings from visual experiments in tactile and visual modalities with different stimuli and participants. A schematic representation of the experiment is shown in Fig. 1.
Fig. 1

A Example of a display pair in the tilt-change condition. One of the items in “b” differs in direction from the corresponding item in “a.” B Schematic representation of the stimulus and procedure. Each display in Figure 1A alternated within each trial. Each stimulus color is analogous to the 5 Hz and 20 Hz temporal frequencies in the tactile condition. C Tactile stimulator and stimulus array

Method

Participants

Twelve undergraduate and graduate students volunteered to participate in this experiment. Their ages ranged from 18 to 23 years. They had normal or corrected-to-normal visual acuity, color vision, and tactile sensation.

Apparatus

In all the experiments reported below, a DOS/V PC running Windows 2000 controlled the display presentation and data collection. For the tactile conditions, participants placed their dominant hand on the display, which consisted of an array of reeds that could vibrate independently; for the visual conditions, they viewed a CRT. For tactile stimulation, a piezoelectric bimorph reed stimulator tablet (KGS Corporation, prototype) was used. A 40 × 56 matrix of reeds on the stimulator was arranged at a 3-mm pitch. It was mounted on the desktop so that its surface was parallel to the horizontal plane. Each reed was 1.3 mm in diameter, with a flattened, rounded tip that directly touched the skin. The effective stimulation area was approximately 120 × 170 mm. During activation, the tips of the reeds protruded through small holes in the flat surface at a given temporal frequency. The vertical displacement of each reed from the flat surface was 0.14 mm at 100 volts. For response collection in the tactile condition, the apparatus included a four-button response box, of which only two buttons were used. Participants wore an eye mask and headphones to mask auditory cues potentially generated by the vibrator.

For visual stimulation, a CRT (Sony, 19 inch) was used for the visual display. Participants viewed the display from a distance of 60 cm. It occupied 27.5 by 36.0°, but stimulus presentation was restricted to 24.5 by 32.0°. For response collection, a two-button mouse connected to the PC was used. A chin rest was used to stabilize the participant’s head.

Stimuli

In both modality conditions, displays in most conditions were composed of two, six, or ten rectangular bar-like items positioned at random locations. Item density was controlled across conditions so that the average inter-item distance was approximately the same for all displays.

In both modality conditions, participants were presented with two similar stimulus arrays that alternated after brief intervals (the visual condition is shown in Fig. 1). The task was to detect a difference between the arrays. The stimulus arrays consisted of vertical or horizontal bars defined by two different features. For the tactile condition, the bars were 24 by 6 mm (8 by 2 pins). Reeds in the background and blank displays did not vibrate. The vertical displacement of the reeds was adjusted so that the subjective pressures from the two vibration frequencies (5 and 20 Hz) were equal. In the visual condition, items were 0.29 by 3.34° rectangles. The CIE coordinates were x = 0.57 and y = 0.36 for the red color and x = 0.34 and y = 0.52 for the green color. The colors were equiluminant (L = 22.6 cd/m²). The blanks and the display background were always black (L = 4.8 cd/m²). After the first set of rectangles was shown for a fixed stimulus-exposure duration, the display was blank for a 200-ms interstimulus interval (ISI). The second set of rectangles then appeared for the same stimulus-exposure duration, followed by a blank field for the ISI. On each cycle, the display returned to the first set of rectangles, and the entire sequence repeated until the participant responded. Stimulus durations were 200, 400, 640, or 800 ms.

The difference between the displays was that a feature value (i.e., tilt direction, color, or vibration frequency) of one of the items changed. For example, the tilt direction of the item changed from the vertical to the horizontal direction. To detect this change, participants had to hold the previous display in memory during the gap period and compare it to the next display until they found the target. The particular pairs of stimulus colors, vibration frequencies, and tilt directions were chosen because they are highly discriminable within a modality.

Procedure

There were two conditions in each modality: the vibration-frequency/color-change condition and the tilt-direction-change condition. In the vibration-frequency/color condition, one of the items alternated its vibration frequency or color between the two displays. In the tilt-direction-change condition, the orientation of one rectangle alternated between vertical and horizontal. As in the classic visual search paradigm, participants’ reaction times for detecting the target were collected and plotted against the set size (i.e., the number of items in the display). The stimulus-exposure duration also was manipulated to determine whether the results were due to insufficient exposure duration or to the upper limit of memory capacity (Rensink, 2000). The number of items held in memory during the task was estimated from the search functions using the method described in Rensink (2000).
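For readers unfamiliar with this analysis, the search slope amounts to a least-squares line fit of reaction time against set size. The sketch below is a hypothetical illustration of that fit, with invented values; it is not the authors' analysis code:

```python
import numpy as np

# Hypothetical mean reaction times (ms) at each set size; these
# values are invented for illustration, not the paper's data.
set_sizes = np.array([2.0, 6.0, 10.0])
mean_rts = np.array([1200.0, 2400.0, 3600.0])

# Least-squares linear fit: RT = slope * set_size + intercept.
# The slope (ms/item) is the per-item search cost.
slope, intercept = np.polyfit(set_sizes, mean_rts, 1)

# r^2: proportion of RT variance explained by set size.
predicted = slope * set_sizes + intercept
ss_res = np.sum((mean_rts - predicted) ** 2)
ss_tot = np.sum((mean_rts - mean_rts.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
```

With these invented, perfectly linear data the fit yields a slope of 300 ms/item and r² = 1.0; the r² values the paper reports (≥0.94) indicate nearly this degree of linearity.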

The participants’ task was to search for a target that changed on one stimulus dimension (e.g., color) and to report its value on the other stimulus dimension (e.g., vertical or horizontal) by pressing one of the buttons. For example, in the color/vibration-change detection conditions, they pressed the right or left button if the changing item was a vertical or horizontal line, respectively. In the tilt-direction-change conditions, they pressed the right button if the changing item was red or 5 Hz and the left button if it was green or 20 Hz. In the visual conditions, participants reported the target by pressing one of the two mouse buttons with their dominant hand. In the tactile conditions, participants reported the target by pressing the appropriate button with the nonstimulated hand.

The target positions varied independently and randomly from trial to trial. The numbers of items in the displays and the changing target characteristics were fixed during trial sessions. Participants completed three blocks of 30 trials per condition.

For the tactile conditions, to make participants’ active exploratory behavior on the tablet more like the visual searches and saccadic eye movements of typical visual change-detection studies, the stimulation area and stimulus size were designed so that participants could not cover all the stimuli simultaneously with one hand. Thus, to check all the items on the tablet, participants had to move their hand. The task involved an unconstrained search; participants could therefore touch the tablet with all their fingers and their palm. Each participant was allowed to devise their own strategy for touching the tablet (e.g., which fingers to use, how many fingers to use, and whether to use their palm). The initial hand position was not designated, but participants placed their palm or index finger on the surface of the tablet at the beginning of each trial.

All trials contained a target. In a pilot study, we also tried to replicate Rensink (2000) using a target present–absent task, in which half of the trials contained a target and participants indicated whether the target was present or absent. In that task, participants reported “target absent” on almost all trials in the tactile conditions. The tactile condition was apparently much more demanding than the visual condition, and it was difficult for participants to find the target. Therefore, in this study, we modified the task.

During all the trial sessions in the current study, the participants underwent articulatory suppression (Baddeley, Lewis, & Vallar, 1984; Gilson & Baddeley, 1969) concurrently with the search task in order to reduce the use of any phonological encoding strategies in verbal working memory. During the trials, they heard a tone every second. In synchrony with the sound, they were required to vocalize repeatedly the sound “za.” Generally, the sound “tha” is used for articulatory suppression. However, because there is no consonant “th” in Japanese, “z” was used instead.

Data analysis

In all of the experiments reported below, observations on incorrect trials (<9 %) and any outliers greater than ±3 SD (<6 %) were discarded. Error bars in the search slopes in this article depict the SD of all the observations, and those in other figures show SD from individual subjects.
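The trial-screening rule described above (discarding incorrect trials, then observations beyond ±3 SD) can be sketched as follows. The function name and data are our own illustrative assumptions, not the authors' code:

```python
import numpy as np

def trim_trials(rts, correct, sd_cutoff=3.0):
    """Drop incorrect trials, then drop reaction times lying more
    than `sd_cutoff` standard deviations from the mean of the
    remaining correct trials."""
    rts = np.asarray(rts, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    kept = rts[correct]
    mean, sd = kept.mean(), kept.std()
    if sd == 0.0:  # all identical RTs: nothing can be an outlier
        return kept
    return kept[np.abs(kept - mean) <= sd_cutoff * sd]
```

For example, ten 1000-ms trials plus one 5000-ms trial would lose the 5000-ms observation, which lies more than three standard deviations above the sample mean.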

Results and discussion

Figure 2 shows the reaction time data from all the conditions in Experiment 1. For all of the stimulus-exposure-duration conditions, reaction times increased linearly with set size. The r² values were ≥0.94 for all conditions, indicating a strong linear relation between set size and reaction time. These results suggest that the participants’ search processes were serial, or inefficient (Treisman & Gelade, 1980), in both the visual and tactile modalities. There could be differences between the modalities for this particular task and these stimuli, and a comparison between strategies for hand movements versus saccadic eye movements may be informative. However, recording and comparing these “overt” acts of directing the sense organs toward a stimulus source will not reveal how the brain accumulates sensory information across space and time. Humans can also shift covert attention, that is, mentally focus on one of several possible sensory stimuli, relatively independently of the sense organs in both vision and touch. Furthermore, the contribution of attention to the ability to detect change has been discussed repeatedly (e.g., Rensink, 2000).
Fig. 2

Search functions for visual and tactile conditions. The left and right panels show the results from the visual and tactile conditions, respectively, and the top and bottom panels show results from the color/frequency-change and tilt-direction-change conditions, respectively

Visual search is now a standard research paradigm for investigating overt attention processes. In the current study, we did not record participants’ overt exploratory behavior (e.g., scan paths of saccadic eye movements or traces of hand movements). However, the results presented in Fig. 2 provide strong evidence that the covert search processes were quite similar across the visual and tactile modalities, and that the searches were serial (i.e., inefficient). Furthermore, at least for the serial exploratory behavior of checking each item in turn, no difference between vision and touch was evident. Although hand movements were not recorded, participants’ typical exploratory behavior was to touch the tablet with their fingers and/or palm to determine where the items were, and then to check a particular item of interest with the tip of the index and/or middle finger. All of the participants actively explored the tablet, and none attempted to keep their hand still during testing.

Search slopes for the visual condition were shallower than those for the tactile condition, suggesting that the change detection in the visual modality was more efficient than in the touch modality. Reaction times were also much shorter in the visual than the tactile condition. This observation also suggests that the tactile search was more demanding and inefficient than the visual search.

We employed a “slope-hold analysis” (Rensink, 2000) to examine whether our results reflect the capacity of working memory or processing speed. From the search functions in Fig. 2, we estimated search slopes and the average number of items held in memory across a temporal gap using the following formula (Rensink, 2000):
$$ \text{items in memory} = \left(\text{stimulus exposure duration} + \text{ISI}\right) / \text{search slope} $$

According to Rensink (2000), the longer the stimulus-exposure duration, the more items can be held and compared in memory. As a result, the search process becomes faster, up to the limit of memory capacity (Rensink, 2000). When working memory reaches its capacity limit, the number of items held in memory stops increasing with the stimulus-exposure duration.
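Rensink's (2000) formula is straightforward to compute. A minimal sketch, with illustrative numbers that are not the paper's data:

```python
def items_in_memory(exposure_ms, isi_ms, slope_ms_per_item):
    """Slope-hold estimate (Rensink, 2000): the number of items
    carried across the temporal gap equals the time available per
    display (exposure + ISI) divided by the per-item search cost
    (the search slope)."""
    return (exposure_ms + isi_ms) / slope_ms_per_item

# Illustrative: an 800-ms exposure, the 200-ms ISI used here, and a
# hypothetical 1000-ms/item search slope give a hold of 1 item.
print(items_in_memory(800, 200, 1000))  # → 1.0
```

A shallower slope at the same exposure duration yields a larger hold estimate, which is why the fast visual searches imply more items in memory than the slow tactile ones.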

Figure 3 shows the results of this analysis. For the tactile conditions, the number of items in memory was always fewer than one, even when the stimulus-exposure duration was 800 ms. The hold functions in these two conditions were linear in stimulus-exposure duration (r² = 0.99 and 1.0 for the frequency-change and tilt-change conditions, respectively), implying that the comparison processes in the tactile conditions remained dependent on the stimulus-exposure duration.
Fig. 3

The estimated number of items held in memory from the functions in Figure 2, based on a “slope-hold” analysis (Rensink, 2000). Open symbols depict the color-frequency-change condition and filled symbols depict the tilt-change condition. Circles depict the visual task and squares depict the tactile task

In contrast, both visual conditions showed better memory. The hold function for the tilt-direction-change condition was not linear (r² = 0.61 and 0.92 for the tilt-direction-change and color-change conditions, respectively), reflecting a plateau (at 6.71 items) at longer stimulus-exposure durations. Such a plateau has been interpreted as an upper limit on working memory (Rensink, 2000). Therefore, it seems that visual, but not tactile, working memory accumulates multiple items during the search process when the stimulus-exposure duration is sufficiently long.

Although these results show that we successfully replicated Rensink (2000), it remains unclear whether we can generalize the upper limit of tactile and visuo-spatial working memory, or whether these two memory types share the same limitations. However, if the capacity limitation of the visuo-spatial working memory can predict some aspects of tactile working memory, then the results from tactile modality are not the result of the participants being a unique population with a poor working-memory capacity compared with the average. In other words, the poor memory observed in the tactile condition reflects the nature of tactile memory rather than factors particular to these participants.

Experiment 2

Position is another characteristic of the items in our displays. To further generalize the results of Experiment 1, we examined position-change detection to determine the number of items in position memory during active haptic scanning. Previous work has suggested that visual memory may contribute to the visual stability of saccadic eye movements (e.g., Irwin, 1992b). By analogy, position memory should be useful for achieving position constancy of a touched object in environment-based coordinates across hand movements. Here, two types of position change were tested. In one condition, the target was an item that altered its absolute position between displays while the distracters maintained their positions. In the other condition, all the items except the target kept their relative positions but shifted horizontally. The latter condition replicated the displacement of the entire display on the skin surface that accompanies a horizontal hand movement, without any actual hand movement. In other words, Experiment 2 tested position memory for the displacement of the entire display on the skin’s surface with and without hand movements.

Method

Participants

Twelve undergraduate and graduate students volunteered to participate in this experiment.

Stimuli

Displays were composed of three, six, eight, or ten items. Two slightly different displays were presented on each trial. One display was the original, and the other was modified from the original on some characteristic. For the tactile conditions, in the absolute-position-change condition, one item in the modified display was displaced 18 mm (6 pins) horizontally to the left or right of its original position. In the relative-position-change condition, all items except the target were displaced 18 mm (6 pins) to the right or left of their original positions. For the visual conditions, in the absolute-position-change condition, one item in the modified display was displaced 1.7° to the left or right of its original position. In the relative-position-change condition, all of the items except the target were displaced 1.7°.

Procedure

The participants’ task was to search for a target that changed positions across the displays and to report the tilt direction of the target by pressing one of the two buttons as quickly as possible.

Results and discussion

Figure 4 shows the reaction time data from all conditions in Experiment 2. In all of the stimulus-exposure-duration conditions, reaction times increased linearly with set size. The r2 values were ≥0.93 for all the conditions, indicating a good correlation between set size and reaction time. These results suggest that the participants used a serial, or inefficient, search process in both the visual and tactile modalities, and in both the relative- and absolute-position-change detection tasks. Furthermore, the serial (inefficient) search behavior generalized to both non-position information (e.g., color, vibration frequency, and shape) and position information for the changing item for both vision and touch.
Fig. 4

Reaction times from all the conditions in Experiment 2. The left and right panels depict the visual and tactile conditions, respectively, and the top and bottom panels depict the absolute-position-change and the relative-position-change conditions, respectively

Figure 5 shows the results of the hold analysis described in Experiment 1. There were linear functions for both visual and tactile modalities in both the relative- and absolute-change conditions. In visual-change conditions, r2 = 0.91 and 0.95 for the absolute and relative positions, respectively. In tactile-change conditions, r2 = 0.87 and 0.97 for these positions, respectively.
Fig. 5

Results of the hold analysis in Experiment 2. “Hold items” on the y-axis refers to the estimated number of items held in memory. Circles and squares depict the visual and tactile conditions, respectively. Open symbols depict the absolute-position condition and filled symbols depict the relative-position condition

The comparison between relative and absolute position changes in the visual conditions indicates that a larger number of items were held in memory in the absolute-position-change condition, suggesting that it was slightly easier for participants to remember items from the display in that condition. In the tactile conditions, however, the comparison between relative and absolute position changes showed no clear divergence. Because the relative-position-change condition was designed to replicate the displacement of the entire display on the skin’s surface that accompanies a hand movement, without any actual hand movement, this result suggests that any involvement of the efference copy of the hand movement cannot have had a large influence on our results.

In Experiment 1, participants could have used a strategy of keeping the tip of the index finger on one stimulus and waiting for the comparison stimulus to appear. Such a strategy could account for the small estimated memory capacity in the tactile conditions across all the experiments. However, in the relative-position-change condition of Experiment 2, almost all of the items changed position on every alternation, so participants could not rely on such a strategy. The comparison between the visual and tactile conditions showed that tactile search for change involved much less memory than visual search for change. The tactile system has very limited access to either absolute or relative item-position information during active tactile scanning. These results suggest that the small amount of estimated memory for change during a tactile search is not specific to memory for vibration frequency and tilt direction, but generalizes to position memory.

Experiment 3

There are several potential explanations for the differences between the vision and touch results observed in Experiments 1 and 2. Experiments 3–5 examined these alternative accounts. First, the stimulus-exposure duration may have been insufficient for the tactile modality to stabilize memory; stimulus parameters should be optimized so that memory storage is equated between vision and touch. Thus, in Experiment 3, we extended the stimulus-exposure durations up to 5000 ms for the color/frequency- and tilt-change conditions to determine whether prolonged stimulus-exposure durations increase the estimated number of items held in memory in the touch modality.

Method

Participants

Twelve undergraduate and graduate students volunteered to participate in this experiment.

Stimuli and procedure

The stimuli and procedure were the same as in Experiment 1 with the following exceptions. For both of the modality conditions, the displays were composed of two, six, or ten rectangles. Stimulus durations were 200, 400, 640, 800, 1000, 1200, 1500, 2000, 3000, and 5000 ms.

Results and discussion

Figure 6 shows the results of the hold analysis. The number of items held in memory slightly increased as the stimulus-exposure durations increased for both conditions. The results from the frequency-change condition did not reach the upper limit of working memory (i.e., the “magical number 7 ± 2,” or “4 ± 1”; Cowan, 2001; Miller, 1956) and neither of the two conditions showed an asymptote (r2 = 0.96 and 0.98 for the frequency-change and tilt-direction-change conditions, respectively), suggesting that duration itself is not the main cause of a small tactile memory storage capacity. These results clearly show that the large difference between the number of visual versus tactile items held in memory in Experiments 1 and 2 is not the result of insufficient stimulus-exposure durations in the tactile conditions.
Fig. 6

Results from tactile conditions with relatively long stimulus-exposure durations. “Hold items” on the y-axis refers to the estimated number of items held in memory

Experiment 4

Another consideration is that the tactile sense is generally believed to have lower spatial resolution than vision (Loomis, 1981a, 1981b, 1982), and this could be the source of the observed differences between the two modalities (Loomis, 1990). To test this possibility, we conducted a visual experiment with healthy participants under simulated low-vision conditions. Participants viewed images blurred by a thin film that optically cut off their high-frequency components. Several visual acuities were tested with this technique.

Method

Participants

Six undergraduate and graduate students volunteered to participate in this experiment.

Apparatus

Five different types of occlusion foils were used to present optically blurred images. Their estimated optical properties corresponded to visual acuities of 1.0, 0.6, 0.3, 0.1, and 0.06 as measured with the Landolt ring test. The foils were attached to glasses that the participants wore while viewing the visual displays.

Stimuli and procedure

The displays were composed of 2, 6, 10, or 12 items; other procedural parameters were the same as in the visual conditions of Experiment 1. The order of the visual acuity conditions was counterbalanced between participants.

Results and discussion

Figure 7 shows the results of the analysis. In both change-detection conditions and at all visual acuities, the estimated number of items held in memory increased as the stimulus-exposure duration increased. Compared with Experiment 1, the estimated number of items held was larger, reflecting the longer stimulus-exposure durations used. In the tilt-direction-change condition, the plateau observed in Experiment 1 and in Rensink (2000) was not observed. In the color-change condition, some participants reported grouping items of the same color into one large shape, which made it easier to detect the color change at lower visual acuities. Consequently, the estimated number of items held was apparently larger than the upper limit of working memory (i.e., the “magical number 7 ± 2,” or “4 ± 1”; Cowan, 2001; Miller, 1956, 1994). However, in both change-detection conditions, the simulated low spatial resolutions did not dramatically reduce the number of items held in memory, suggesting that resolution differences are not responsible for the observed differences between tactile and visual search slopes. A repeated-measures ANOVA on the five visual acuities at four ISIs also confirmed that there was no significant main effect of visual acuity and no interaction. The main effects were F(4, 20) = 4.12, MSE = 72.95, p = 0.531 for the tilt-change condition, and F(4, 20) = 0.74, MSE = 5.98, p = 0.576 for the color-change condition. These results suggest that acuity does not influence the estimated number of items held in memory during a search with these types of displays, implying that the small estimated tactile memory observed in Experiments 1 and 2 cannot be attributed to acuity differences between the vision and touch modalities.
Fig. 7

Results from visual conditions with simulated low visual acuities. The left and right panels depict the results from the visual tilt-direction-change-detection and the visual color-change-detection conditions, respectively. Error bars represent SDs
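The repeated-measures test above partitions variance within participants before forming the F ratio. As a minimal sketch of that logic, the following implements a standard one-way repeated-measures ANOVA in plain Python; the data matrix is made up for illustration and is not taken from the experiment:

```python
# One-way repeated-measures ANOVA, pure Python.
# Rows are participants, columns are conditions (e.g., visual acuity levels).
# The numbers below are illustrative only, not the experiment's data.

def rm_anova(data):
    n_subj = len(data)
    n_cond = len(data[0])
    grand = sum(sum(row) for row in data) / (n_subj * n_cond)
    cond_means = [sum(row[j] for row in data) / n_subj for j in range(n_cond)]
    subj_means = [sum(row) / n_cond for row in data]

    # Partition the total sum of squares into condition, subject, and error terms;
    # removing the subject term is what makes the design "repeated measures."
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_cond = n_subj * sum((m - grand) ** 2 for m in cond_means)
    ss_subj = n_cond * sum((m - grand) ** 2 for m in subj_means)
    ss_error = ss_total - ss_cond - ss_subj

    df_cond = n_cond - 1
    df_error = (n_subj - 1) * (n_cond - 1)
    f = (ss_cond / df_cond) / (ss_error / df_error)
    return f, df_cond, df_error

data = [[1, 2, 4],
        [2, 3, 3],
        [3, 5, 4]]
f, df1, df2 = rm_anova(data)
print(f"F({df1}, {df2}) = {f:.2f}")  # F(2, 4) = 3.50
```

The reported analysis additionally crosses acuity with exposure duration (a two-factor within-subject design), but the variance partitioning per factor follows the same pattern.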

Experiment 5

Finally, visual exploration is believed to cover both local and global features, whereas haptic exploration primarily covers local features (Lakatos & Marks, 1999). It is therefore possible that uncertainty about item locations beyond the current hand and finger positions increased the need for exploration relative to vision, and that this “where-to-move-the-hand” decision process depleted the working-memory resources available for holding items. To test this possibility, we presented visual cues indicating the relative item locations during the tactile search.

Method

Participants

Twelve undergraduate and graduate students volunteered to participate in this experiment.

Apparatus

For visual stimulation, the same CRT display as in Experiment 1 was used. It was placed in front of the participant, with the tactile stimulator set between the participant and the CRT display on the same table. Thus, the visual display faced the participant vertically while the tactile display lay parallel to the horizontal plane, so the two displays did not share the same space. In this condition, participants did not wear an eye mask. A wooden box-shaped cover hid the participants’ hands and the tactile stimulator.

Stimuli and procedure

The tactile stimuli and experimental procedure were the same as in Experiment 1, with the following exceptions. Stimulus-exposure durations were longer, at 1000, 1200, 1600, and 2000 ms. Visual presentations included white dots (1.0 by 1.0°, L = 55.1 cd/m2) that indicated the relative locations of the items on the tactile display. The participants viewed the CRT monitor while their hands and the tactile stimulators were hidden under the box. The presentation timing and duration of the visual and tactile stimuli were synchronized.

Results and discussion

Figure 8 shows the results. As in the previous experiments, in both conditions the estimated number of items held in memory increased as the stimulus-exposure duration increased. Moreover, the addition of visual cues slightly enhanced the number of items held in memory. However, the number of items in memory was still smaller than that for the unimodal visual search in Experiment 1, and the function remained largely linear (r2 = 0.81 and 0.95 in the visual-cue and no-visual-cue conditions, respectively). These results suggest that increased uncertainty regarding the locations of items outside the hand and fingers cannot explain the large differences observed between the vision and touch modalities in Experiments 1 and 2.
Fig. 8

Results from tactile conditions with (open squares) and without (filled circles) visual cues for the relative stimulus positions

General discussion

Our results indicated that tactile search for change is associated with less memory than visual search for change. These results can be generalized to position memory. Moreover, insufficient stimulus-exposure durations for tactile stimuli or legibility differences between vision and touch cannot explain these results.

These findings provide new evidence accounting for the disadvantage of tactile 2D picture perception relative to the visual domain. The small tactile memory during the exploratory process means that, when humans explore by touch, little beyond what is currently under the fingertips is held in memory, and that the haptic system is almost amnesic when operating outside of the fingertips.

Is it nonetheless possible to achieve coherent tactile perception? Given such an extremely limited memory, it would be prudent for the system to avoid heavy reliance on stored haptic sensory information. One possible account of the current results is that haptic integration relies more on memory for the “relative” traces of movements of body parts (e.g., hands and fingers) than on memory for the materials or for sensory-driven position information. For example, Magee and Kennedy (1980) proposed that passive hand-movement guidance is sufficient for participants to achieve appropriate 2D picture perception, suggesting that the information used to identify 2D raised-line drawings is predominantly kinesthetic (see also Richardson & Wullemin, 1981). If so, the haptic system may integrate movement information and material information only when needed, with minimal ongoing integration between material memory and hand movements.

Another possibility is that the change-detection task used in the current study does not properly assess scene representation during the exploratory process, at least for touch. For example, frequent hand movements during tactile exploration lead to frequent displacements of the touched materials on the surface of the skin. One visual study reported that estimated visual memory can be reduced to one or two items when the materials to be memorized move on the retina and must be integrated as coherent objects (Saiki, 2003). Alternatively, it is possible that visual scene encoding benefits from peripheral vision in a way that haptic encoding cannot: some aspects of peripheral objects may be encoded and compared via attention prior to a saccadic eye movement and foveation (Henderson & Hollingworth, 2003). In addition, if the tactile modality holds multiple items in memory but this information does not aid the search process for some reason (e.g., because of an extremely small field of view for detecting a change), this would lead to an underestimation of the number of items held in memory. There is a general but slight decline in tactile working-memory capacity compared with visual-memory capacity in sighted participants (Bliss & Hämäläinen, 2005; Mahrer & Miles, 2002), but no previous report indicates modality differences as large as those reported here. Accordingly, further research is required.

Vision is an active sense with greater similarities to the exploring hand than to a static picture (Gibson, 1966; O’Regan, 1992). Indeed, a large body of recent research on visual change detection suggests that humans’ wide, colored, rich, and detailed subjective visual field is not fully represented and integrated in the brain as “pictures in the head” (Irwin, 1991, 1992a; Irwin & Gordon, 1998). Instead, humans only know how to move the fovea or attention to locally “grasp” an object in order to perceive it consciously (Simons & Rensink, 2005; Wolfe, 1999), much as the hand is moved to explore an object in the environment. However, given that the precise relationship between a change-detection task and haptic perception of a 2D object remains an open question, we have provided one example in which the haptic system is not analogous to visual sampling via saccades and attention as inputs to visuospatial working memory. Therefore, caution is required when designing sensory-substitution devices for the visually impaired population and virtual-reality systems for haptic perception.

Notes

Acknowledgments

This work was supported by grants from the Japan Society for the Promotion of Science, the Nissan Science Foundation, and the Strategic Information and Communications R&D Promotion Programme of the Ministry of Internal Affairs and Communications of Japan. The authors thank Amelia Hunt and Patrick Cavanagh for their comments.

Supplementary material

ESM 1

(MOV 1577 kb)

ESM 2

(MOV 2548 kb)

References

  1. Baddeley, A. D., Lewis, V., & Vallar, G. (1984). Exploring the articulatory loop. The Quarterly Journal of Experimental Psychology, 36A, 233–252.
  2. Balakrishnan, J. D., Klatzky, R. L., Loomis, J. M., & Lederman, S. J. (1989). Length distortion of temporally extended visual displays: Similarity to haptic spatial perception. Perception & Psychophysics, 46, 387–394.
  3. Bliss, I., & Hämäläinen, H. (2005). Different working memory capacity in normal young adults for visual and tactile letter recognition task. Scandinavian Journal of Psychology, 46, 247–251. doi: 10.1111/j.1467-9450.2005.00454.x
  4. Cowan, N. (2001). The magical number 4 in short-term memory: A reconsideration of mental storage capacity. The Behavioral and Brain Sciences, 24, 87–114. doi: 10.1017/S0140525X01003922
  5. Duncan, J., & Humphreys, G. W. (1989). Visual search and stimulus similarity. Psychological Review, 96, 433–458. doi: 10.1037/0033-295X.96.3.433
  6. Gallace, A., & Spence, C. (2009). The cognitive and neural correlates of tactile memory. Psychological Bulletin, 135, 380–406.
  7. Gallace, A., Tan, H. Z., & Spence, C. (2006). The failure to detect tactile change: A tactile analogue of visual change blindness. Psychonomic Bulletin & Review, 13, 300–303.
  8. Gibson, J. J. (1966). The senses considered as perceptual systems. Boston, MA: Houghton Mifflin.
  9. Gilson, E. Q., & Baddeley, A. D. (1969). Tactile short-term memory. The Quarterly Journal of Experimental Psychology, 21, 180–184.
  10. Heller, M. A. (1989). Picture and pattern perception in the sighted and the blind: The advantage of the late blind. Perception, 18, 379–389.
  11. Henderson, J. M., & Hollingworth, A. (2003). Eye movements and visual memory: Detecting changes to saccade targets in scenes. Perception & Psychophysics, 65, 58–71.
  12. Hill, J. W., & Bliss, J. C. (1968). Modeling a tactile sensory register. Perception & Psychophysics, 4, 91–101.
  13. Hochberg, J. (1986). Representation of motion and space in video and cinematic displays. In K. R. Boff, L. Kaufman, & J. P. Thomas (Eds.), Handbook of perception and human performance: Vol. 1. Sensory processes and perception (Chap. 22). New York, NY: Wiley.
  14. Irwin, D. E. (1991). Information integration across saccadic eye movements. Cognitive Psychology, 23, 420–456. doi: 10.1016/0010-0285(91)90015-G
  15. Irwin, D. E. (1992a). Perceiving an integrated visual world. In T. Inui & J. L. McClelland (Eds.), Attention and performance XIV: Information integration in perception and communication (pp. 121–142). Cambridge, MA: MIT Press.
  16. Irwin, D. E. (1992b). Memory for position and identity across eye movements. Journal of Experimental Psychology: Learning, Memory, and Cognition, 18, 307–317.
  17. Irwin, D. E., & Gordon, R. D. (1998). Eye movements, attention, and transsaccadic memory. Visual Cognition, 5, 127–155.
  18. Kennedy, J. M. (2000). Recognizing outline pictures via touch alignment theory. In M. A. Heller (Ed.), Touch, representation and blindness (pp. 67–98). Oxford: Oxford University Press.
  19. Kennedy, J. M., & Fox, N. (1977). Pictures to see and pictures to touch. In D. Perkins & B. Leondar (Eds.), The arts and cognition (pp. 118–135). Baltimore, MD: Johns Hopkins University Press.
  20. Klatzky, R. L., & Lederman, S. J. (1987). The intelligent hand. In G. H. Bower (Ed.), The psychology of learning and motivation (Vol. 21, pp. 121–151). San Diego, CA: Academic Press.
  21. Lakatos, S., & Marks, L. E. (1999). Haptic form perception: Relative salience of local and global features. Perception & Psychophysics, 61, 895–908.
  22. Lederman, S. J., & Klatzky, R. L. (1997). Relative availability of surface and object properties during early haptic processing. Journal of Experimental Psychology: Human Perception and Performance, 23, 1680–1707.
  23. Lederman, S. J., Klatzky, R. L., Chataway, C., & Summers, C. D. (1990). Visual mediation and the haptic recognition of two-dimensional pictures of common objects. Perception & Psychophysics, 47, 54–64.
  24. Loomis, J. M. (1981a). Tactile pattern perception. Perception, 10, 5–27.
  25. Loomis, J. M. (1981b). On the tangibility of letters and braille. Perception & Psychophysics, 29, 37–46.
  26. Loomis, J. M. (1982). Analysis of tactile and visual confusion matrices. Perception & Psychophysics, 31, 41–52.
  27. Loomis, J. M. (1990). A model of character recognition and legibility. Journal of Experimental Psychology: Human Perception and Performance, 16, 106–120.
  28. Loomis, J. M., Klatzky, R. L., & Lederman, S. J. (1991). Similarity of tactual and visual picture recognition with limited field of view. Perception, 20, 167–177. doi: 10.1068/p200167
  29. Loomis, J. M., & Lederman, S. J. (1986). Tactual perception. In K. R. Boff, L. Kaufman, & J. P. Thomas (Eds.), Handbook of perception and human performance: Vol. II. Cognitive processes and performance (Chap. 31). New York, NY: Wiley.
  30. Luck, S. J., & Vogel, E. K. (1997). The capacity of visual working memory for features and conjunctions. Nature, 390, 279–281. doi: 10.1038/36846
  31. Magee, L. E., & Kennedy, J. M. (1980). Exploring pictures tactually. Nature, 283, 287–288. doi: 10.1038/283287a0
  32. Mahrer, P., & Miles, C. (2002). Recognition memory for tactile sequences. Memory, 10, 7–20.
  33. Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81–97.
  34. Miller, G. A. (1994). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 101, 343–352. doi: 10.1037/0033-295X.101.2.343
  35. O’Regan, J. K. (1992). Solving the “real” mysteries of visual perception: The world as an outside memory. Canadian Journal of Psychology, 46, 461–488.
  36. Rensink, R. A. (2000). Visual search for change: A probe into the nature of attentional processing. Visual Cognition, 7, 345–376.
  37. Richardson, B. L., & Wullemin, D. B. (1981). Can passive tactile perception be better than active? Nature, 292, 90. doi: 10.1038/292090a0
  38. Saiki, J. (2003). Spatiotemporal characteristics of dynamic feature binding in visual working memory. Vision Research, 43, 2107–2123. doi: 10.1016/S0042-6989(03)00331-6
  39. Simons, D. J., & Rensink, R. (2005). Change blindness: Past, present, and future. Trends in Cognitive Sciences, 9, 16–20. doi: 10.1016/j.tics.2004.11.006
  40. Treisman, A. M., & Gelade, G. (1980). A feature-integration theory of attention. Cognitive Psychology, 12, 97–136. doi: 10.1016/0010-0285(80)90005-5
  41. Wolfe, J. M. (1999). Inattentional amnesia. In V. Coltheart (Ed.), Fleeting memories: Cognition of brief visual stimuli (pp. 71–94). Cambridge, MA: MIT Press.

Copyright information

© The Psychonomic Society, Inc. 2015

Authors and Affiliations

  • Takako Yoshida (1) — Email author
  • Ayumi Yamaguchi (2)
  • Hideomi Tsutsui (2)
  • Tenji Wake (3)

  1. Department of Mechanical Sciences and Engineering, Tokyo Institute of Technology, Meguro, Japan
  2. Department of Psychology, Chukyo University, Nagoya, Japan
  3. Institute of Visual Science, Kanagawa University, Yokohama, Japan
