Abstract
In this study, we examined the plausibility of the hypothesis that, in the human brain, an internal simulation of grasping contributes to tool recognition. Such an internal simulation must be performed by utilizing internal models of the human hand. Using an experimental paradigm we built, an internal model corresponding to a geometrically transformed hand shape was retrained. The retrained internal model of the dominant hand affected cognitive judgments of the object size of tools used by the dominant hand but did not influence those of tools used by the non-dominant hand. Conversely, the results in the non-dominant-hand training condition showed the reverse tendency. These results indicate the plausibility of the hypothesis.
1 Introduction
When you feel thirsty, you can immediately find a drinking cup even in a cluttered environment, although there are many kinds of cups. The object concept of a tool must therefore be maintained by a universal representation of that tool in the brain. Traditionally, it has been considered that object concepts are expressed through symbols such as declarative memories and visual features. Recently, however, support has grown for the view that object concepts are also related to sensorimotor experiences of tool use (e.g., [1, 2]). Even if an unknown tool is verbally explained to you, you cannot truly understand it; you come to know a tool only by actually using it repeatedly.
Meanwhile, it has been biologically established that internal models of the human body are represented in the brain. For example, an inverse dynamics model of the monkey eye is represented in the cerebellum [3], and an internal model of the human arm is acquired in the cerebellum [4]. The existence of an internal simulation of action has also been pointed out, because motor-related cortical areas are partly activated even during motor imagery (e.g., [5, 6]). A patient with partial damage to the parietal cortex became unable to take physical constraints into account during motor imagery of hand movements [6]. Moreover, in an observation task, motor-related cortical areas were partly activated when participants viewed graspable tools, whereas the activations in these areas were smaller when they viewed objects of other categories [7]. From this point of view, we have proposed the hypothesis that, in the human brain, an internal simulation of grasping a tool contributes to tool recognition: judging whether we can grasp a target object helps us recognize it as a particular tool. Such an internal simulation can be realized by utilizing internal models of the human hand. In order to validate the plausibility of the hypothesis, we used an experimental paradigm that retrains the internal model of a geometrically transformed hand shape, and investigated the relationship between the retrained internal model and cognitive judgments of tool size [8].
2 Methods
Fifteen participants took part in the first experiment (right-handed, aged 18–24) and fifteen in the second experiment (right-handed, aged 18–22). Handedness was assessed with the Edinburgh Handedness Inventory. All participants were completely naive to the specific purpose of the study and joined the experiments after signing an informed consent form.
The experimental system was built in a dark room and consisted of finger-motion measurement devices (CyberGlove, right- and left-hand types, CyberGlove Systems Inc.), two three-dimensional motion measurement devices (FASTRAK, Polhemus; OPTOTRAK 3020, Northern Digital Inc.), a mirror, a display (XL2720T, BenQ Inc.), an experimental chair using an ergonomically designed car seat (RECARO GmbH & Co. KG), and a chin rest to fix the head (see Fig. 1). The lengths of the finger segments between the joints of each participant's hand were measured, and a hand and forearm shaped according to the measured lengths were displayed on a monitor. Participants viewed the screen through the mirror placed in front of them. The position, size, orientation, and shape of the displayed hand were adjusted so that it looked like their own hand. Infrared light-emitting diodes (IR LED markers) were attached to each fingertip to accurately measure the fingertip positions with the OPTOTRAK at a sampling frequency of 200 Hz. The joint angles of each participant's right and left hands were measured by the two CyberGloves, and the hand position and orientation were measured by the FASTRAK at a sampling frequency of 60 Hz. The displayed hand and forearm moved in synchronization with the participant's hand movements.
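Because the devices run at different rates (200 Hz OPTOTRAK versus 60 Hz FASTRAK), the streams must be brought onto a common time base before the displayed hand can follow the real one. The paper does not describe the synchronization method, so the following is only an illustrative sketch using linear interpolation:

```python
def resample_linear(t_src, v_src, t_dst):
    """Linearly resample a scalar signal sampled at times t_src onto the
    time stamps t_dst -- e.g. aligning 60 Hz FASTRAK pose samples with
    200 Hz OPTOTRAK frames.  Assumes t_src and t_dst are ascending.
    (Illustrative only; the actual synchronization used with the
    apparatus is not described in the paper.)"""
    out = []
    j = 0
    for t in t_dst:
        if t <= t_src[0]:
            out.append(v_src[0])          # clamp before the first sample
            continue
        if t >= t_src[-1]:
            out.append(v_src[-1])         # clamp after the last sample
            continue
        while t_src[j + 1] < t:
            j += 1                        # advance to the bracketing interval
        t0, t1 = t_src[j], t_src[j + 1]
        w = (t - t0) / (t1 - t0)          # interpolation weight in [0, 1]
        out.append(v_src[j] * (1 - w) + v_src[j + 1] * w)
    return out
```

In practice each coordinate of the hand pose would be resampled this way onto the rendering clock of the display.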
The basic idea of the experimental paradigm we built is to examine the relationship between the cognitive processing of graspable tools and a retrained internal model of the human hand. The shape of the participant's hand was geometrically transformed, and the transformed hand was displayed on the monitor. In the transformed hand, the segment between the CM and MP joints of the thumb and the segments between the MP and PIP joints of the other fingers were each lengthened by a factor of 1.8 (see Fig. 2).
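The effect of the transformation on the displayed fingertip can be sketched with planar forward kinematics. This is a simplified illustration (a single finger in a plane, with made-up segment lengths), not the rendering model actually used in the experiment:

```python
import numpy as np

def fingertip_position(joint_angles, segment_lengths, scale_proximal=1.0):
    """Planar forward kinematics for a single finger.

    joint_angles    -- relative joint angles in radians, one per segment
    segment_lengths -- lengths of the finger segments in mm
    scale_proximal  -- factor applied to the proximal segment only
                       (1.8 reproduces the transformation in the paper)
    """
    lengths = np.asarray(segment_lengths, dtype=float).copy()
    lengths[0] *= scale_proximal          # lengthen the proximal segment
    angles = np.cumsum(joint_angles)      # relative -> absolute angles
    x = np.sum(lengths * np.cos(angles))
    y = np.sum(lengths * np.sin(angles))
    return x, y

# Hypothetical index finger with 45/30/20 mm segments, slightly flexed:
normal = fingertip_position([0.3, 0.4, 0.2], [45, 30, 20])
transformed = fingertip_position([0.3, 0.4, 0.2], [45, 30, 20],
                                 scale_proximal=1.8)
```

For the same joint configuration, the transformed hand reaches farther, which is why it affords grasping larger objects.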
Before the experiments, synchronous tactile stimuli were simultaneously applied to the participant's hand and the displayed hand for a few minutes so that the participant would feel the displayed hand to be his or her own. Then, two small circles and the normal hand were displayed on the monitor, and the participant repeatedly executed a finger-movement task in which the fingertips of the thumb and the index finger were placed on the respective circles, as shown in Fig. 3. The positions of the circles were changed randomly in each trial, and 20 trials formed one set. After these trials, only the two circles were displayed, and the participant performed the same task 20 times without the hand being displayed. Errors were computed as the differences between the measured grip apertures and the premeasured correct widths. Training continued until the average error fell below 8 mm; if this criterion was not met, training was stopped after five sets. During training, the participant also trained with the contralateral hand in alternation. After the training, we carried out the two kinds of measurements described below. After a break of a few minutes, the two small circles and the geometrically transformed hand were displayed, the same participant repeatedly executed the task, and the grip-aperture errors were computed in the same way. During this training, the participant also trained alternately with the contralateral, untransformed hand. After the training, the same measurements were executed. To maintain the learning effect, one training set was executed between the measurements. The right hand was geometrically transformed in the first experiment, and the left hand in the second experiment.
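The stopping rule of the training procedure (20-trial sets, an 8 mm mean-error criterion, at most five sets) can be sketched as follows. The measurement callable is a stand-in for the real apparatus:

```python
def run_training(measure_grip_error, error_threshold_mm=8.0,
                 trials_per_set=20, max_sets=5):
    """Training schedule sketched from the paper's description.

    measure_grip_error -- callable returning the absolute error (mm)
                          between the measured grip aperture and the
                          premeasured correct width for one trial
                          (hypothetical stand-in for the apparatus)
    Returns the per-set mean errors recorded until the stop criterion.
    """
    set_means = []
    for _ in range(max_sets):
        errors = [measure_grip_error() for _ in range(trials_per_set)]
        mean_error = sum(errors) / trials_per_set
        set_means.append(mean_error)
        if mean_error < error_threshold_mm:
            break  # criterion reached: stop training early
    return set_means
```

For example, a participant whose errors drop from 12 mm to 6 mm would finish after the second set.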
Before the experiments, participants answered a questionnaire about thirty kinds of tools (e.g., Fig. 5): the hand (right and/or left) used with each tool, the frequency of use, the grip type, and so on. Based on the results, for each participant two tools were selected from those for which a grip direction was specified, and two more tools were selected from the remaining tools. An image was randomly selected from among ten different image sizes of each selected tool and displayed as shown in Fig. 4. Participants were instructed to answer whether they recognized the displayed object as that tool with regard to its size: if they felt it was too small or too large to be that tool, they answered “No”. Because 10 trials were executed for each image size, 100 trials were measured per tool (10\(\,\times \,\)10 trials). As a further measurement, participants were instructed to give a verbal estimate of the apparent size of a displayed tool image, using a 10-point scale on which 1 corresponded to the size of a 1-yen coin and 10 to the size of a compact disc (CD) (see [9]). The image display was the same as in the above measurement, with 10 trials per tool.
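The randomized presentation schedule and the per-size recognition rates implied by this design can be sketched as follows (an illustrative reconstruction, not the authors' actual stimulus-control code):

```python
import random

def build_trial_sequence(n_sizes=10, trials_per_size=10, seed=0):
    """Randomized presentation order: each of the 10 image sizes appears
    10 times, giving the paper's 100 trials per tool."""
    rng = random.Random(seed)
    trials = [s for s in range(n_sizes) for _ in range(trials_per_size)]
    rng.shuffle(trials)
    return trials

def recognition_rates(trials, answers, n_sizes=10):
    """Proportion of 'yes, I recognize it as the tool' answers per size."""
    yes = [0] * n_sizes
    total = [0] * n_sizes
    for size, ans in zip(trials, answers):
        total[size] += 1
        yes[size] += 1 if ans else 0
    return [y / t for y, t in zip(yes, total)]
```

The resulting rate-versus-size curve is the raw material for the threshold analysis in the next section.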
3 Results
The results of the cognitive judgments of tool size are shown in Figs. 6 and 7. The lower and upper thresholds were detected as the two intersection points at which the probability becomes 0.5, obtained by interpolating between the data points with a sigmoid function. Object sizes between the lower and upper thresholds are recognized as the tool. For a right-handled tool in the transformed right-hand condition, larger objects come to be recognized as the tool, as shown in Fig. 6(b), whereas the result for a left-handled tool, shown in Fig. 6(a), does not change. In contrast, in the transformed left-hand condition, shown in Fig. 7, the results show the reverse tendency. To examine the cognitive tool sizes for all tools, the change rate of these thresholds was calculated as follows:
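The equation itself appears to have been lost in extraction (it was likely rendered as an image). Judging from the symbol definitions that follow, the change rate was presumably the relative shift of the threshold center, along the lines of:

```latex
\mathrm{CR} \;=\; \frac{\mathrm{CTH2} - \mathrm{CTH1}}{\mathrm{CTH1}}
```

possibly multiplied by 100 to express a percentage.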
where CTH1 and CTH2 express the centers between the lower and upper thresholds in the normal-hand and transformed-hand conditions, respectively. The average values of the change rates are shown in Figs. 8 and 9. In the transformed right-hand condition of Fig. 8, the change rates of TLR and TLL differ significantly (\(p < 0.05\)), as they do in the transformed left-hand condition (\(p < 0.05\)). Moreover, the change rates of TLR in the transformed left-hand condition and of TLL in the transformed right-hand condition do not differ significantly from zero (one-sample t-test, \(p > 0.05\)), whereas the other change rates do (one-sample t-test, \(p < 0.05\)). All the change rates in both the transformed right-hand and left-hand conditions were divided into two categories: tools used by the transformed hand and tools used by the hand opposite to the transformed hand; each category was further divided into TL1 and TL2. Figure 9 shows the average change rate for each of the four cases. The four combinations of change rates between the two categories differ significantly (Tukey–Kramer method, \(p < 0.05\)). The change rates of the tools used by the transformed hand differ significantly from zero (one-sample t-test, \(p < 0.05\)), whereas those of the tools used by the opposite hand do not (one-sample t-test, \(p > 0.05\)). Although one might suspect that these results depend on differences in object size between the tool categories, the object sizes do not differ significantly (\(p > 0.05\)).
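The threshold detection and change-rate computation can be sketched as follows. Note one simplifying assumption: the paper interpolates with a sigmoid function, whereas this sketch uses piecewise-linear interpolation between data points to locate the 0.5 crossings:

```python
def half_crossings(sizes, p_yes):
    """Locate the object sizes at which the recognition probability
    crosses 0.5 (the lower and upper thresholds), by linear interpolation
    between data points -- a stand-in for the paper's sigmoid fit."""
    crossings = []
    pairs = list(zip(sizes, p_yes))
    for (x0, y0), (x1, y1) in zip(pairs, pairs[1:]):
        if (y0 - 0.5) * (y1 - 0.5) < 0:   # the segment straddles 0.5
            crossings.append(x0 + (0.5 - y0) * (x1 - x0) / (y1 - y0))
    return crossings                      # [lower_threshold, upper_threshold]

def change_rate(lower_n, upper_n, lower_t, upper_t):
    """Relative shift of the threshold-band center between the normal
    (CTH1) and transformed (CTH2) hand conditions."""
    cth1 = (lower_n + upper_n) / 2
    cth2 = (lower_t + upper_t) / 2
    return (cth2 - cth1) / cth1
```

A positive change rate means the band of sizes recognized as the tool shifted toward larger objects after retraining.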
Figure 10 shows the normalized apparent sizes calculated as follows:
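The normalization formula was likewise lost in extraction; from the symbol definitions in the next sentence it is presumably the standard min–max normalization:

```latex
\mathrm{NAS}_{i} \;=\; \frac{\mathrm{AS}_{i} - \mathrm{AS_{min}}}{\mathrm{AS_{max}} - \mathrm{AS_{min}}}
```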
Here, NAS\(\mathrm{_{i}}\) and AS\(\mathrm{_{i}}\) stand for the \(i\)th normalized apparent size and the \(i\)th apparent size, respectively, and AS\(\mathrm{_{max}}\) and AS\(\mathrm{_{min}}\) express the maximum and minimum apparent sizes measured for each tool. Note that AS\(\mathrm{_{max}}\) and AS\(\mathrm{_{min}}\) in the transformed-hand condition were the values for the same tool as in the normal-hand condition. The difference between the two linear regression lines in the normal and transformed hand conditions was tested statistically by an analysis of covariance. The difference in slopes is not significant in either Fig. 10(a) or (b) (\(p > 0.05\)), nor is the difference in the average values in either figure (\(p > 0.05\)).
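The regression lines compared here have a closed-form least-squares fit; the following is a minimal sketch of that fit only (the ANCOVA significance test itself would normally be done with a statistics package):

```python
def ols_slope_intercept(x, y):
    """Ordinary least-squares fit of y = a*x + b, in closed form.
    Fitting one line per condition gives the two slopes and levels
    that the analysis of covariance compares."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx          # slope
    b = my - a * mx        # intercept
    return a, b
```

Equal slopes and equal means across the two conditions are exactly what Fig. 10 reports, i.e., no body-based rescaling of apparent size.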
4 Discussion
In the transformed-hand condition, although participants were unable to execute the finger-movement task skillfully before the training, they gradually became able to perform the task accurately without viewing the displayed hand. Note that the human brain must acquire, through this training, an internal model relating fingertip positions and joint angles. On the basis of the hypothesis, the internal model of the right hand should be utilized in an internal simulation of grasping tools used by the right hand, but not tools used by the left hand. Thus, the hypothesis predicts that the retrained internal model of the right hand influences the cognitive judgments of tools used by the right hand but does not affect those of tools used by the left hand. Moreover, because the displayed hand was geometrically transformed so as to be able to grasp larger objects, the hypothesis also predicts that larger objects come to be recognized as the tool. The above results of the cognitive judgments therefore indicate the plausibility of the hypothesis.
However, there is a concern that our results for the cognitive judgments can be explained not only by the hypothesis but also by the recently reported BBR (body-based rescaling) effect, in which the apparent size of objects is rescaled by the perceived size of one's body (e.g., [9, 10]). For example, when objects are magnified by magnifying goggles, they appear to shrink back to near-normal size when one's hand (also magnified) is placed next to them [9]. In our experiments, participants may have felt that their hand had become larger because the finger segments were lengthened, and this effect could have caused the changes in the cognitive judgments of tool size. From this point of view, we investigated the apparent sizes of various tools in the same way as Linkenauger et al. [9]. If the changes in the cognitive judgments had been caused by the BBR effect, the apparent sizes in the transformed-hand condition should have become smaller than those in the normal-hand condition. As shown in Fig. 10, however, the apparent sizes in the normal and transformed hand conditions do not differ. Thus, the results in Fig. 10 show that the BBR effect did not arise in the transformed-hand condition under our experimental paradigm. From the above considerations, we have demonstrated the plausibility of the hypothesis that an internal model of the human hand contributes to tool recognition.
References
Katayama, M., Kawato, M.: A neural network model integrating visual information, somatosensory information and motor command. J. Robot. Soc. Japan 8, 757–765 (1990) (in Japanese)
Borghi, A.M.: Object concepts and action. In: Grounding Cognition: The Role of Perception and Action in Memory, Language, and Thinking, pp. 2–34. Cambridge University Press, Cambridge (2005)
Shidara, M., Kawano, K., Gomi, H., Kawato, M.: Inverse-dynamics model eye movement control by Purkinje cells in the cerebellum. Nature 365, 50–52 (1993)
Imamizu, H., Miyauchi, S., Tamada, T., Sasaki, Y., Takino, R., Putz, B., Yoshioka, T., Kawato, M.: Human cerebellar activity reflecting an acquired internal model of a new tool. Nature 403(6766), 192–195 (2000)
Jeannerod, M.: The representing brain: neural correlates of motor intention and imagery. Behav. Brain Sci. 17, 187–245 (1994)
Sirigu, A., Duhamel, J.R., Cohen, L., Pillon, B., Dubois, B., Agid, Y.: The mental representation of hand movements after parietal cortex damage. Science 273(5281), 1564–1568 (1996)
Chao, L.L., Martin, A.: Representation of manipulable man-made objects in the dorsal stream. NeuroImage 12, 478–484 (2000)
Katayama, M., Kurisu, T.: Human object recognition based on internal models of the human hand. In: Yamaguchi, Y. (ed.) Advances in Cognitive Neurodynamics (III), pp. 591–598. Springer, Heidelberg (2013)
Linkenauger, S.A., Ramenzoni, V., Proffitt, D.: Illusory shrinkage and growth: body-based rescaling affects the perception of size. Psychol. Sci. 21(9), 1318–1325 (2010)
van der Hoort, B., Guterstam, A., Ehrsson, H.H.: Being Barbie: the size of one's own body determines the perceived size of the world. PLoS ONE 6(5), 1–10 (2011)
Acknowledgments
This research was partially supported by MEXT KAKENHI (C) No. 15K00200.
© 2016 Springer International Publishing AG
Katayama, M., Akimaru, Y. (2016). An Internal Model of the Human Hand Affects Recognition of Graspable Tools. In: Hirose, A., Ozawa, S., Doya, K., Ikeda, K., Lee, M., Liu, D. (eds) Neural Information Processing. ICONIP 2016. Lecture Notes in Computer Science, vol 9950. Springer, Cham. https://doi.org/10.1007/978-3-319-46681-1_24