Categorization of seen objects is often determined by their shape. Shape, however, is not exclusive to the visual modality: the haptic system is also adept at identifying objects by their shape. An important question for understanding shape processing is therefore whether humans store separate, modality-dependent shape representations or integrate the information into a single multisensory representation. To answer this question, we created a metric space of computer-generated novel objects varying in shape. These objects were then produced on a 3-D printer to yield tangible stimuli. In a categorization experiment, participants first explored the objects visually and haptically. We found that both modalities led to highly similar categorization behavior. Next, participants were trained either visually or haptically on shape categories within the metric space. As expected, visual training increased visual performance, and haptic training increased haptic performance. Importantly, however, we found that visual training also improved haptic performance, and vice versa. Two additional experiments showed that the location of the categorical boundary in the metric space also transferred across modalities, as did the heightened discriminability of objects adjacent to the boundary. This observed transfer of metric category knowledge across modalities indicates that visual and haptic shape information is integrated into a shared multisensory representation.
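To make the notion of a metric shape space with a trained category boundary concrete, here is a minimal Python sketch. It assumes a simple linear morph between two prototype shape-parameter vectors; the vectors, the one-dimensional parameterization, and the boundary value are illustrative assumptions, not the authors' actual stimulus-generation procedure.

```python
import numpy as np

# Sketch of a 1-D metric shape space: each stimulus is defined by a morph
# weight w in [0, 1] interpolating between two hypothetical prototype
# shape-parameter vectors A and B. Illustrative only; not the study's code.

A = np.array([1.0, 0.2, 0.5])  # parameters of prototype shape A (hypothetical)
B = np.array([0.0, 0.9, 0.1])  # parameters of prototype shape B (hypothetical)

def morph(w: float) -> np.ndarray:
    """Shape parameters at position w along the A-to-B continuum."""
    return (1.0 - w) * A + w * B

# Evenly spaced stimuli along the continuum; distances between stimuli are
# proportional to |w_i - w_j|, which gives the space its metric structure.
weights = np.linspace(0.0, 1.0, 7)
stimuli = [morph(w) for w in weights]

# A trained category boundary partitions the continuum. Its location is the
# quantity compared between visually and haptically trained groups, and
# discriminability is predicted to peak for stimuli adjacent to it.
BOUNDARY = 0.5
labels = ["A" if w < BOUNDARY else "B" for w in weights]
print(labels)  # ['A', 'A', 'A', 'B', 'B', 'B', 'B']
```

In the actual experiments the space was multidimensional and the printed objects were explored by eye or by hand; the sketch only fixes the idea of a parameterized space with a boundary whose location can transfer across modalities.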
This research was supported by a PhD stipend from the Max Planck Society; by the WCU (World Class University) program through the National Research Foundation (NRF) of Korea, funded by the Ministry of Education, Science and Technology (Grant No. R31-2008-000-10008-0); by the Basic Science Research Program through the National Research Foundation of Korea, funded by the Ministry of Science, ICT and Future Planning (Grant No. NRF-2013R1A1A1011768); and by the Brain Korea 21 PLUS Program through the National Research Foundation of Korea, funded by the Ministry of Education.