A simple technique to study embodied language processes: the grip force sensor
Research in cognitive neuroscience has shown that brain structures serving perceptual, emotional, and motor processes are also recruited during the understanding of language that refers to emotion, perception, and action. However, the exact linguistic and extralinguistic conditions under which such language-induced activity in modality-specific cortex is triggered are not yet well understood. The purpose of this study is to introduce a simple experimental technique that allows for the online measurement of language-induced activity in motor structures of the brain. The technique consists of a grip force sensor that captures subtle grip force variations while participants listen to words and sentences. Since grip force reflects activity in motor brain structures, the continuous monitoring of force fluctuations provides a fine-grained estimate of motor activity across time. In other words, this method combines localization of the source of language-induced activity to motor brain structures with high temporal resolution of the recorded data. To facilitate comparison of the data to be collected with this tool, we present two experiments that describe in detail the technical setup, the nature of the recorded data, and the analyses that we applied (including justification of the data filtering and artifact rejection). We also discuss how the tool could be used in other domains of behavioral research.
Keywords: Grip-force sensor · Embodiment · Language · Motor system
In a landmark demonstration, Hauk, Johnsrude, and Pulvermüller (2004) showed that the passive processing of isolated words denoting motor actions activates brain structures involved in the planning and execution of those voluntary movements. Similar observations in other modalities (e.g., words denoting color activate the fusiform gyrus just anterior to color-selective regions of extrastriate visual cortex, and words denoting odor activate olfactory areas in the prepiriform cortex; see Binder & Desai, 2011, for a review) have had a tremendous impact on our understanding of the neural basis of lexical meaning. Indeed, the activation of modality-specific brain structures by language was taken as evidence that these structures participate in the elaboration of word semantics. However, as experiments became more sophisticated and words were presented within sentences, the picture became more complex. For example, language-induced activity in modality-specific brain structures varies with the linguistic and extralinguistic context (e.g., Willems & Casasanto, 2011). Hence, whereas the word “to push” in a sentence such as “Now I push the button” triggers activity in the brain’s motor structures, little evidence for such activity is obtained for the same action word when it is presented in a negative context (e.g., “Now I do not push the button”; e.g., Tettamanti et al., 2008; see also Aravena et al., 2012). Moody and Gennari (2010) also showed that the effort implied by a given action word modulates activity in premotor cortex.
Hence, the word “to push” triggers more activity in motor structures when it is presented in a sentence such as “The delivery man is pushing the piano” rather than in a sentence such as “The delivery man is pushing the chair.” Similarly, van Elk, van Schie, Zwaan, and Bekkering (2010) reported differential neural responses for the word “swim” when the action was performed by an animal or by a human (“The duck swims in the pond” vs. “The woman swims in the pond”). The flexibility with which modality-specific brain regions are recruited by language remains consistent with the view that cerebral motor structures contribute to the elaboration of meaning. Nonetheless, such flexibility suggests that this recruitment might not be a function of the (literal) word meaning, but rather of the (implied) speaker meaning. In other words, the recruitment of these structures hinges on principles of inference (cf. pragmatics) and depends on the utterance context and preexisting knowledge. In order to understand the role of modality-specific brain structures in meaning construction, it becomes necessary to specify the conditions under which these structures are recruited by language. For such a purpose, experimental tools that allow the rapid and economical testing of hypotheses would be of major help. Here, we describe in detail a simple experimental tool—a grip force sensor—to capture online the involvement of motor structures in the processing of language that describes motor actions (Aravena et al., 2012, 2014; Frak, Nazir, Goyette, Cohen, & Jeannerod, 2010).
Our ability to hold and lift objects depends critically on successful predictive and reactive control of the grip forces required to prevent the slippage of these objects through our fingertips (Delevoye-Turrell & Wing, 2005). We must sense various object characteristics, such as their weight, surface structure, and shape. This information is transferred from primary, premotor, supplementary, and cingulate cortical motor areas via spinal motor neurons to the finger muscles (e.g., Dum & Strick, 1991; Lemon, 1993). Kuhtz-Buschbeck, Ehrsson, and Forssberg (2001) used functional MRI to show that the automatic adjustments of grip force during normal holding of an object are controlled via the (contralateral) primary sensorimotor cortex and the (contralateral) intraparietal cortex. When participants intentionally increase their grip force to hold the object more firmly, dorsal and ventral premotor cortices are also involved. The continuous control of grip force is thus well understood in terms of its neurophysiological regulation, and the usefulness of grip force assessment in clinical diagnosis and pediatrics has long been established (e.g., Delevoye-Turrell, Giersch, & Danion, 2003; Nowak & Hermsdörfer, 2006; Rauch et al., 2002). By monitoring grip force in healthy individuals, our team recently revealed that subtle but selective grip force variations can be seen during the processing of language that refers to motor actions (Aravena et al., 2012, 2014; Frak et al., 2010). In these studies, participants listened to individual spoken words or sentences while holding a grip force sensor that continuously monitored grip force variations throughout the experiment. Their task was simply to count the occurrences of a predefined target within the sentences—for example, the number of times the name of a country was mentioned within the verbal material.
These studies revealed that when the sentence contained an action word—but not otherwise—a significant enhancement of the grip force level was observed starting within the first 300 ms after action-word onset. Modulations of the sentential context further specified that an increase in grip force to action words hinged on the relevance of the action within the verbally described situation. More specifically, grip force to action words increased when the word was presented within an affirmative sentential context (“Fiona signs the contract”), but not when it was presented within either a negative context (“Fiona does not sign the contract”) or a volitional context (“Fiona wants to sign the contract”) (Aravena et al., 2014; Aravena et al., 2012).
The work by Frak et al. (2010) and Aravena et al. (2012, 2014) pioneered the use of the grip force sensor for studies on language processing. However, because standards were still lacking, the way the data were recorded and analyzed changed from one study to another. Our experience with this tool has since accumulated, and here we aim to suggest criteria for its use, in order to facilitate the comparison of data collected with this tool. In the following text, we thus present two experiments that describe in detail the technical setup, as well as the nature of the recorded data and the analyses that were applied. One important observation from past experiments is that without an explicit instruction about the initial grip force that participants should apply on the sensor, some participants tend to hold the sensor too loosely. The data from these participants tend to remain close to zero, independently of the experimental conditions. In the present study, we thus explicitly instructed participants to voluntarily apply a constant force on the sensor. Since psycholinguistic studies typically last tens of minutes, in the first experiment we simply monitored the fluctuations in grip force level over time when the participant was not involved in any specific task. This experiment allowed us to determine when pauses should be introduced into an experimental design in order to limit motor and/or cognitive fatigue. The second experiment reports the specific effects observed on the variations in grip force levels when processing language materials related to motor actions. This experiment serves to explain the different criteria that we chose for data filtering and artifact rejection.
All participants in this study gave their informed written consent. The study was approved by the ethics committee, Comité de Protection des Personnes Sud-Est II, in Lyon, France (IRB 11263).
A total of 16 participants with no reported history of psychiatric or neurological disorders participated in the experiment. They were all right-handed (i.e., they used their right hand for at least seven of the ten actions described in the Edinburgh Handedness Inventory; Oldfield, 1971) and had no prior knowledge about the scientific aim of the study.
Equipment and data acquisition
Two distinct computers were used for data recording and stimulus presentation, in order to ensure synchronization between the audio files and the grip force measurements (estimated error < 5 ms). This error was estimated by comparing the interval between the triggers sent immediately before and after playing a .wav file with the actual duration of that file. The resulting errors remain very small, even though they sum the delays of the sound-card buffer iterations and of other real-time processes that occasionally run on Windows operating systems.
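The error estimate described above can be sketched as follows. This is a minimal illustration rather than the authors' actual acquisition code: the function name and the trigger timestamps are hypothetical, and we assume triggers are timestamped in seconds on a common clock.

```python
import wave

def playback_error_ms(trigger_before_s, trigger_after_s, wav_path):
    """Estimate the audio/force synchronization error for one trial.

    Compares the interval between the triggers sent just before and just
    after playing a .wav file with the file's nominal duration.
    """
    with wave.open(wav_path, "rb") as w:
        nominal_s = w.getnframes() / w.getframerate()  # duration from the header
    measured_s = trigger_after_s - trigger_before_s
    return (measured_s - nominal_s) * 1000.0           # positive = playback lag
```

Averaging this quantity over all trials of a session yields the per-setup synchronization error reported above.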
Participants were comfortably seated at a desk. They were asked to rest their right arm on the tabletop and to hold the load cell between the thumb, index, and middle fingers of their right hand. The hand posture was controlled so as to have the wrist slightly tilted upward, to avoid resting the load cell on the tabletop. The experimenter demonstrated the desired arm and hand positions, and participants were asked to hold the cell with a constant grip force of 1.5 N. To achieve this, at the start of the experiment the experimenter instructed participants to increase their grip force until the level of 1.5 N was reached. Participants were then requested to maintain this grip force for a total of 4 min, during which no further feedback was given. Participants closed their eyes throughout the duration of the experiment.
Only the compression force, Fz, was included in the analysis, because this parameter is the most accurate indicator of prehensile grip force (Frak et al., 2010). It captures the force vector perpendicular to the contacted surfaces of the sensor.
Results and discussion
When participants are required to keep their eyes closed and no slippage occurs, grip force is controlled solely via proprioception. Under such conditions, a systematic reduction in grip force levels was observed over the course of the 4-min trial. Similar results have been reported in other domains of postural control; they most likely reflect a gradual loss of proprioceptive sensitivity. For example, Wann and Ibrahim (1992) found systematic drift of perceived limb position toward the body during visual occlusion unless occasional glimpses of the arm were provided. The authors therefore concluded that visual updating is important to prevent proprioceptive drift (PD). Nevertheless, Wolpert, Goodbody, and Husain (1998) and Desmurget, Vindras, Gréa, Viviani, and Grafton (2000) found only limited PD even in the absence of visual information, in situations in which an object was held in a precision grip for intervals of under a minute.
In the present study, we show that grip force gradually drifts over extended periods of time. Although the average grip force level at the beginning of the experiment was set at 1.5 N, our data showed large variation between participants’ performance right from the start of the experiment. This may be explained by the fact that the preset 1.5-N force level was well above the slip-ratio limit, which can be estimated at close to 27 g (i.e., roughly 0.26 N) when holding an object of 68 g (Turrell, Li, & Wing, 2001). Hence, even if participants lowered their grip force level during the trial, the level was never low enough to risk object slippage. Of note, all participants showed a clear drop in grip force level over the 4 min, suggesting that frequent breaks followed by a recalibration of the grip force should be considered. Most importantly, some participants (e.g., Participants 5 and 9) exhibited particularly strong cyclical modulations of grip force as compared to the others. Figure 2 visualizes these different points and provides a means of identifying those participants who were prone to noisy data (i.e., voluntary grip force modulations), despite the absence of a particular task to perform. These participants could bias the results of language experiments, especially if the number of trials per condition is small. As we will see in Experiment 2, such participants can and should be discarded with an automatic artifact rejection procedure.
A total of 26 (15 females, 11 males) right-handed French students (mean age = 21.2 years old, SD = 2.27) with no known hearing problems or reported history of psychiatric or neurological disorders participated in the experiment. Sixteen of these participants had also participated in Experiment 1.
Examples of stimuli used in Experiment 2 and their approximate English translation

Type of Stimuli | Example Sentence | Approximate English Translation
Sentences with action verbs | Jacques brosse ses dents. | Jacques brushes his teeth.
Control sentences with nonaction verbs | Lyse tente de compléter son devoir. | Lyse attempts to complete her homework.
Target sentences containing the name of a country | Caroline planifie une excursion en Italie. | Caroline plans an excursion in Italy.
The voice of a French female adult was recorded while she read the sentences, using Adobe Soundbooth. The mean word durations were 330 ms (SD = 8.67 ms) for the action verbs and 408 ms (SD = 7.55 ms) for the nonaction verbs. There was a pause of 2,000 ms between the presentations of the sentences.
Equipment and data acquisition
These were identical to those of Experiment 1.
Participants wore headphones and were comfortably seated at a desk. They were asked to rest their right arm on the tabletop. Participants were required to hold the load cell as described in Experiment 1 and were guided through feedback from the experimenter to increase grip force until it reached the level of 1.5 N. Participants had their eyes closed throughout the duration of the experiment. They were instructed to carefully listen to the sentences and to silently count the total number of sentences that contained the name of a country. To avoid muscular fatigue, participants were given a break after every 20 sentences (approximately every 80 s). During this break, they were requested to rest the cell on the table while they rotated and relaxed their hand as they pleased, until they were ready to continue. After the rest periods, the initial grip force calibration of 1.5 N was again applied. The entire experiment lasted approximately 7–9 min, depending on the length of time each participant took during the five breaks.
Prior to the data analysis, each signal component was low-pass filtered at 15 Hz with a fourth-order, zero-phase, low-pass Butterworth filter. The Fz signal was then segmented into 1,000-ms epochs, spanning from 200 ms prior to the target word onset to 800 ms after target word onset. A baseline correction was performed on the mean amplitude of the interval spanning from 200 to 0 ms prior to target word onset. The baseline correction was implemented because of a possible global change in grip force during the session, and because we were only interested in grip force changes. Thus, we adjusted the poststimulus values by the values present during the baseline period. A simple subtraction of the baseline values from all of the values in the epoch for each given trial was performed. Since the participants were asked to hold the grip force sensor throughout the experiment, a “negative” grip force will refer to a decrease in grip force with respect to the baseline (and not to the absence of grip force, which would imply dropping the sensor cell). Finally, an automatic artifact rejection was used to remove segments surpassing an amplitude range of ±200 mN with respect to the baseline and/or showing an amplitude change of more than 100 mN within a period of less than 100 ms, which is indicative of finger movements. Participants with an artifact rejection rate of more than 20 % per condition (i.e., more than 7/35 segments) were excluded from the analyses. The Fz signals for action words in the action-in-focus condition were averaged for each participant, as were those for nonaction words in the control condition. Justifications for these criteria are given in the “Results and discussion” section.
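As a concrete sketch, the preprocessing steps just described (zero-phase Butterworth filtering, epoching, and baseline subtraction) could be implemented as follows in Python with NumPy/SciPy. The 1 kHz sampling rate, the function name, and the representation of word onsets as sample indices are our assumptions for illustration, not details taken from the original setup.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000  # assumed sampling rate (Hz); the original rate is not stated here

def preprocess_epochs(fz, word_onsets, fs=FS):
    """Filter the Fz signal and cut baseline-corrected 1,000-ms epochs.

    fz          : 1-D array, raw compression force (mN)
    word_onsets : iterable of target-word onsets, in samples
    Returns an (n_trials, fs) array spanning -200..+800 ms around onset.
    """
    # Fourth-order low-pass Butterworth at 15 Hz; filtfilt applies it
    # forward and backward, which makes the filter zero-phase.
    b, a = butter(4, 15 / (fs / 2), btype="low")
    fz_filt = filtfilt(b, a, fz)

    pre, post = int(0.2 * fs), int(0.8 * fs)   # 200 ms before, 800 ms after
    epochs = []
    for onset in word_onsets:
        seg = fz_filt[onset - pre : onset + post]
        baseline = seg[:pre].mean()            # mean of the -200..0 ms interval
        epochs.append(seg - baseline)          # negative values = force decrease
    return np.array(epochs)
```

Because the baseline mean is subtracted trial by trial, a slow session-wide drift in absolute grip force (cf. Experiment 1) does not contaminate the poststimulus values.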
Following Aravena et al. (2014; Aravena et al., 2012), to evaluate the time course of language-induced motor activation, we drew on an influential neurophysiological model of spoken sentence comprehension by Friederici (2002). According to this model, information about syntactic structure is formed in a first phase on the basis of information about word category approximately 100–300 ms after word onset. In a second phase (300–500 ms), lexical–semantic and morphosyntactic processes are computed for thematic role assignments. In a third and final phase (500–1,000 ms), the information that was generated in Phases 1 and 2 is integrated and reanalyzed. Applying this model, for each word condition (action verbs and nonaction verbs), the averaged grip force values in the three time windows were compared with the proper baseline (i.e., the averaged grip force values over the segment between –200 and 0 ms before word onset). For the windows that presented significant grip force modulations with respect to that baseline, a comparison between the conditions was conducted using repeated measures analysis of variance (ANOVA).
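The window-based analysis above can be sketched as follows, under the same illustrative assumptions (1 kHz sampling, baseline-corrected epochs spanning -200 to +800 ms). Note that we truncate the model's third window at 800 ms to match the epoch length; the window labels and function name are ours.

```python
import numpy as np
from scipy.stats import ttest_1samp

# Friederici (2002) phases; the 500-1,000-ms window is truncated at 800 ms
# because the epochs described above end 800 ms after word onset.
WINDOWS_MS = {"100-300": (100, 300), "300-500": (300, 500), "500-800": (500, 800)}

def window_means(epochs, fs=1000, epoch_start_ms=-200):
    """Mean grip force per time window.

    epochs : (n_participants, n_samples) array of baseline-corrected,
             per-participant average epochs spanning -200..+800 ms.
    Returns a dict mapping window name -> (n_participants,) array.
    """
    out = {}
    for name, (lo, hi) in WINDOWS_MS.items():
        a = int((lo - epoch_start_ms) * fs / 1000)   # ms -> sample index
        b = int((hi - epoch_start_ms) * fs / 1000)
        out[name] = epochs[:, a:b].mean(axis=1)
    return out

# Because the epochs are baseline-corrected, comparing a window with its own
# baseline reduces to a one-sample test against zero, e.g.:
#   t, p = ttest_1samp(window_means(action_epochs)["100-300"], 0.0)
```

Windows showing a significant deviation from zero would then enter the repeated measures ANOVA comparing the action and nonaction conditions.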
Results and discussion
To illustrate the different steps of our data analyses, we will first present the data from two typical participants, one whose data are representative (Participant 19) and another whose data are too noisy to exploit correctly (Participant 5). Note that Participant 5 also had noisy data in Experiment 1, indicating the presence of idiosyncratic grip force signatures.
Amplitude: Trials in which the amplitude exceeded ±200 mN relative to baseline. This criterion was based on visual inspection of our data and might serve as a general guideline when an initial grip force level of 1.5 N is applied. Rejecting these trials thus discarded outliers relative to the initial grip force and identified participants who showed strong fluctuations in grip force levels.
Max–Min (x): Trials that contained a sudden change in force amplitude of more than 100 mN within an interval of 100 ms. This criterion discarded trials in which participants moved their fingers.
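Under the same illustrative assumptions as before (baseline-corrected epochs in mN, 1 kHz sampling), the two rejection criteria can be expressed as a single check per trial; the function name and parameterization are ours.

```python
import numpy as np

def is_artifact(epoch, fs=1000, amp_limit=200.0, jump_limit=100.0, win_ms=100):
    """Flag a baseline-corrected epoch (mN) under the two rejection criteria."""
    # Criterion 1 (Amplitude): any sample outside +/-200 mN of baseline.
    if np.abs(epoch).max() > amp_limit:
        return True
    # Criterion 2 (Max-Min): a change of more than 100 mN within any
    # 100-ms window, indicative of finger movements.
    w = int(win_ms * fs / 1000)
    for i in range(len(epoch) - w):
        seg = epoch[i : i + w]
        if seg.max() - seg.min() > jump_limit:
            return True
    return False
```

A participant would then be excluded when the proportion of flagged epochs exceeds 20% in any condition (i.e., more than 7 of 35 segments).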
In line with our previous findings (Aravena et al., 2012; Aravena et al., 2014; Frak et al., 2010), the present results confirm that the processing of action words within a sentential context focused on body action provokes a sustained increase in grip force levels starting within the first 300-ms period after word onset—that is, within the time frames associated in the model proposed by Friederici (2002) with the retrieval of word form and word category. Note that the rather early onset of these effects is in line with studies in which event-related potentials have been analyzed during spoken sentence processing. These studies have shown that listeners rapidly relate the acoustic signal to the semantic context before they even know what the unfolding word is going to be (for a review, see Hagoort, 2008). In addition, the present work has identified important sources of artifacts and clarified how to deal with them in future work.
The two experiments presented here serve to demonstrate how fine-grained analyses of grip force fluctuations can reveal language-induced activity originating from the motor structures of the brain. When variables related to gradual force drifts (proprioceptive drift—see Exp. 1) and noise (due to abrupt finger movements or psychological factors such as fatigue—see Exps. 1 and 2) are controlled for, the measures obtained from a grip force sensor reveal an increase in muscle activity within a few hundred milliseconds after the onset of an action word. Note that, given the conduction time of approximately 18–20 ms between the primary motor cortex (M1) and hand muscles (estimations using transcranial magnetic stimulation; Rossini, Rossi, Pasqualetti, & Tecchio, 1999), activity in cortical motor structures occurs slightly earlier than that measured on the sensor. Thus, simply subtracting this value from the onset of significant force modulations provides a good estimation of the onset of brain activity in these motor structures. The majority of corticospinal projections originate from M1, although dorsal and ventral premotor areas, cingulate motor areas, and supplementary motor area contribute to the corticospinal projection, as well (see Maier et al., 2002). Brain imaging is thus necessary for determining where within those motor structures the observed language effects occur.
In conclusion, the fine-grained analysis of grip force variations during language processing offers a valid and sensitive method for rapidly testing hypotheses about the potential involvement of motor brain structures in language processing. Subsequent verification with brain-imaging methods could help confirm these assumptions. A clear limitation of the present method is that testing is currently limited to action words, whereas research on embodied language processing targets language-induced activity in all modality-specific brain structures. Note, however, that by implementing a strip-force transducer, we could easily measure force output modulations of other body parts (leg, mouth, or body). Similarly, although we have so far used the device to investigate language comprehension, it could equally well serve studies on language production.
The grip force paradigm can also help us gain a better understanding of the properties of motor simulation—that is, the cognitive processes involved in (1) the preparation of voluntary actions, (2) the anticipation of the effects of an action, and (3) the understanding of the intention of an action performed by others. Motor simulation studies have relied on subjective verbal reports or brain-imaging techniques because an efficient behavioral approach was lacking. We suggest that our passive grip force paradigm could serve as an objective tool to study the dynamics of motor simulation across many fields of the cognitive sciences. In line with this suggestion, we recently used our paradigm to reveal the motor simulation processes that take place while watching videos of action scenes in an odorant-augmented environment (Blampain & Delevoye-Turrell, 2015). Finally, because the passive grip force paradigm requires no specific body position or movement, it may hold great potential for investigating the neural correlates of motor simulation using electroencephalography and near-infrared spectroscopy (NIRS) technologies (see, e.g., Krause, Lindemann, Toni, & Bekkering, 2014), and it could also serve studies on motor system pathologies.
The motor system is the only way the human brain can communicate. We thus proposed here that the use of grip force sensors can be a powerful tool to gain a better understanding of how the motor system intervenes for embodied social communication, in both verbal and gestural contexts.
Note the presence of 1/f fluctuations in the data, which could bear complementary information not exploited currently.
This research was supported by the French National Center for Scientific Research (CNRS).
- Aravena, P., Delevoye-Turrell, Y., Deprez, V., Cheylus, A., Paulignan, Y., Frak, V., & Nazir, T. A. (2012). Grip force reveals the context sensitivity of language-induced motor activity during “action words” processing: Evidence from sentential negation. PLoS ONE, 7, e50287. doi: 10.1371/journal.pone.0050287
- Blampain, J., & Delevoye-Turrell, Y. (2015). Effect of action scenes on muscular contraction: Evidence provided by a grip-force sensor. Poster presented at the 14th European Congress of Sport Psychology, Bern, Switzerland.
- Delevoye-Turrell, Y., & Wing, A. (2005). Action and motor skills: Adaptive behaviour for intended goals. In K. Lamberts & R. Goldstone (Eds.), Handbook of cognition (pp. 130–160). London: Sage. Retrieved from http://knowledge.sagepub.com/view/hdbk_cognition/SAGE.xml
- Krause, F., Lindemann, O., Toni, I., & Bekkering, H. (2014). Different brains process numbers differently: Structural bases of individual differences in spatial and nonspatial number representations. Journal of Cognitive Neuroscience, 26, 768–776. doi: 10.1162/jocn_a_00518
- Maier, M. A., Armand, J., Kirkwood, P. A., Yang, H. W., Davis, J. N., & Lemon, R. N. (2002). Differences in the corticospinal projection from primary motor cortex and supplementary motor area to macaque upper limb motoneurons: An anatomical and electrophysiological study. Cerebral Cortex, 12, 281–296.
- Rauch, F., Neu, C. M., Wassmer, G., Beck, B., Rieger-Wettengl, G., Rietschel, E., … Schoenau, E. (2002). Muscle analysis by measurement of maximal isometric grip force: New reference data and clinical applications in pediatrics. Pediatric Research, 51, 505–510.