Introduction

Touch is a rich medium of social exchange: through it, individuals form strong attachments and cooperative alliances, negotiate status differences, soothe and calm, and express sexual and romantic interest (Hertenstein et al. 2006a). Given the centrality of touch to social life, it is likely to be a highly gendered form of human communication.

There is a longstanding interest in exploring the magnitude and sources of gender differences in the expression of emotion. The predominant focus in this work has been on the face and voice (LaFrance et al. 2003; Scherer et al. 2003). In the present investigation, we examine gender differences in the communication of distinct emotions via touch in humans. We do so in the context of evolutionary and constructionist theories of gender, build upon previous work published in Sex Roles, and rely upon well-tested methodology in the field of emotion. Although the sample is limited to the U.S., the work has implications and raises questions for the communication of emotion via touch in other cultures. In relation to this point, Table 1 includes sample characteristics for the empirical articles cited.

Table 1 Cited empirical studies, with sample demographics and the location from which each sample was drawn

Touch and Emotion in Human Communication

Some research indicates that touch predominantly communicates the hedonic tone of emotion, that is, overall warmth or distress (Hertenstein and Campos 2001; Jones and Yarbrough 1985; Knapp and Hall 1997), or that touch intensifies the meaning of emotional displays in other modalities (Knapp and Hall 1997). Recent studies by Hertenstein and colleagues have documented, however, that touch communicates several distinct emotions between humans (Hertenstein et al. 2006a; see Clynes and Nettheim 1982 for evidence that people can express distinct states by pressing a pressure-sensitive button). In this research, two strangers were placed in a room in which they were separated by a barrier. They could not see one another, but they could reach each other through an aperture in a curtain. One person touched the other on the forearm, in each instance trying to convey one of 12 emotions. After each touch, the person touched chose which emotion s/he thought the encoder was communicating by selecting a term in a modified forced-choice format (Frank and Stennett 2001).

In two different undergraduate samples from Spain and the U.S., participants accurately decoded anger, fear, disgust, love, gratitude, and sympathy at above-chance levels (Hertenstein et al. 2006a). Participants did not decode happiness, surprise, sadness, embarrassment, envy, and pride at above-chance levels. Accuracy rates for the accurately decoded emotions ranged from 48% to 83%, comparable to the rates observed in studies of facial displays and vocal communication with samples around the world (Elfenbein and Ambady 2002; Scherer et al. 2003). Specific tactile behaviors demonstrated by the U.S. sample were associated with each of the emotions. For example, sympathy was associated with stroking and patting, anger with hitting and squeezing, disgust with a pushing motion, gratitude with shaking of the hand, fear with trembling, and love with stroking.

Sympathy, Anger, Happiness and the Gendered Nature of Touch

The study of gender and emotion is one of the richest traditions in the field of emotion (for reviews, see Brody and Hall 2008; Citrin et al. 2004; LaFrance et al. 2003). Researchers interested in gender have focused on stereotypes (e.g., Hess et al. 2000; Robinson and Johnson 1997; Timmers et al. 2003), self-reported experience (e.g., Gross and John 1998; Hess et al. 2000; Simon and Nath 2004), verbalization of emotion (e.g., Roter et al. 2002), emotional expression (e.g., Halberstadt et al. 1988; Kring and Gordon 1998), physiology (e.g., Kring and Gordon 1998), nonverbal decoding of emotions (e.g., Hall 1990), and constructs such as emotional intelligence (e.g., Day and Carroll 2004; Mayer et al. 2000) and emotional competence (e.g., Gohm and Clore 2000). The empirical areas of inquiry within this tradition have established several regularities: women are stereotyped as being more emotional (e.g., Plant et al. 2000); women smile more (LaFrance et al. 2003) and cry more often (Gross et al. 1994); and women are more expressive of emotion in general and better at decoding emotion (Brody and Hall 2008). The present research contributes to this literature by exploring an understudied modality of emotion communication between humans—touch.

With respect to touch in human communication, scattered studies have yielded certain regularities. Males appear to initiate touch more than females (Henley 1973; although see Stier and Hall 1984). Studies also find gender differences in the perceived valence of a touch (Hall et al. 2005; Hertenstein et al. 2006). In this research, women are more likely than men to perceive touch from opposite-gender strangers as unpleasant and as an invasion of privacy. Moreover, the more women perceive a touch from a male stranger as sexual, the less they perceive the touch as warm and friendly, whereas the more men perceive a touch from a female stranger as sexual, the more they perceive it as warm, pleasant, and friendly (for a review, see Hertenstein et al. 2006b).

In the present study, we tested predictions regarding gender differences in the accuracy with which individuals can communicate distinct emotions through touch, relying on previously published data (Hertenstein et al. 2006a). This study included the requisite four dyad groups (encoder–decoder): female–female, female–male, male–male, and male–female. Overall accuracy across the 12 emotions did not vary by gender, as reported in the original article. However, gender differences were not analyzed for each emotion separately, and these aggregate analyses limited the inferences that could be made regarding gender differences in the communication of emotion via touch. In the present study, we analyzed each of the emotions separately, focusing on two emotions that evolutionary and social role accounts both suggest should vary by gender: sympathy and anger. We also focused on happiness, an emotion that has shown consistent gender differences (e.g., LaFrance et al. 2003).

Evolutionary and social role accounts suggest potential and consistent gender differences in the communication of sympathy and anger via touch. Sympathy is a care-taking emotion that supports other-oriented, altruistic behavior (Eisenberg et al. 1989; Goetz and Keltner 2007). Within evolutionary accounts, it is assumed that women disproportionately take on the care-taking demands of raising offspring. Within social role accounts, it is well documented that central socialization practices—parental discourse, child-rearing manuals, cultural stereotypes—amplify the place of sympathy in women’s psyches (e.g., Clark 1997; Clark and Shields 1997). Both accounts suggest that women should be more likely to experience and express sympathy. Consistent with this analysis, women report experiencing more sympathy than do men (Brody and Hall 2000; Shiota et al. 2006). In the present study, we predicted that when females are in an experimental dyad, sympathy would be decoded at above-chance levels.

Anger, in contrast, promotes aggression (Berkowitz 1993). Given that anger produces assertive, competitive behavior in face-to-face interactions, anger is intertwined with status contests and affordances (e.g., Tiedens and Leach 2004). Evolutionary accounts (e.g., Daly and Wilson 1994; Kenrick et al. 2004) contend that men more readily enter into confrontational encounters to rise in hierarchies and gain preferential access to mates, and therefore should more readily experience and express anger. Guided by this theorizing, one recent study found that participants could more accurately and more quickly detect male than female angry facial displays (Becker et al. 2007). Social role accounts likewise assume that anger is a gendered emotion, one more fitting with the stereotypical roles granted to men, revolving around self-assertion, competition, and status (Kring 2000). Guided by such theorizing, it has been found, for example, that although mothers talk more about most emotions to their young daughters than to their young sons—to socialize them in the ways of care-taking—they talk more with their sons about anger (Fivush et al. 2006). And in adults, men consistently report experiencing and expressing more anger than women (Brody and Hall 2000; Kring 2000; Plant et al. 2000). We therefore predicted that dyads involving males would communicate anger with touch at above-chance levels.

Finally, one of the most consistent gender differences identified in the literature on emotion relates to the stereotypes, experience, and expression of happiness (Hess et al. 2004). Women are assumed to experience and express happiness more than men in a variety of contexts (Brody and Hall 2008; Fischer 1993; Hall et al. 2000). Stereotypes such as this have been documented as early as toddlerhood and are thought to arise from socialization practices originating from television, parental stereotypes, differential reinforcement of emotional expression, and actual observations of emotionality (Birnbaum 1984).

Researchers have documented empirical support for such stereotypes (Brody and Hall 2000); women report experiencing more happiness than men (Brody 1993) and smile more than men (Hall et al. 2002; LaFrance et al. 2003). A number of explanations have been proposed for these gender differences, some of which emphasize power and status (Henley 1977, 1995; LaFrance and Henley 1994), some of which emphasize the social roles of the genders (Brody and Hall 2000; Shields 2000), and some of which combine these two explanations (LaFrance and Hecht 1999; LaFrance et al. 2003). Predicated upon this theory and the empirical work demonstrating that women experience and express more positive emotionality than men, we predicted that dyads comprised solely of females would communicate happiness with touch at above-chance levels.

In summary, based on the theoretical and empirical evidence reviewed above, we made predictions regarding three emotions: sympathy, anger, and happiness. Specifically, we predicted that when females are in an experimental dyad, sympathy would be decoded at above-chance levels. We also predicted that dyads involving males would accurately communicate anger with touch. Finally, we predicted that dyads comprised solely of females would accurately communicate happiness with touch.

Although the primary goal of the current study was to address the decoding of emotion via touch, a subsidiary purpose was to describe the tactile signals used to communicate sympathy, anger, and happiness between humans. The field of emotion has been advanced by precise characterizations of emotion-specific signals in the face (Ekman 1993) and voice (Scherer et al. 2003). The two most common coding systems for the face are Izard’s (1979) maximally discriminative facial movement coding system (MAX) and Ekman and Friesen’s (1978) Facial Action Coding System (FACS). These are anatomically based coding systems that require frame-by-frame video analysis of muscle movements. Researchers have also devised techniques to analyze spectrograms of vocal expressions of emotion (Scherer et al. 2003). When analyzing vocal expressions of emotion, researchers attend to a number of technical parameters, including the mean, variability, and range of frequency, as well as vocal intensity and spectral noise (Scherer et al. 2003).

Researchers have sometimes, but not always, relied on bottom-up descriptions of emotion signals rather than making a priori predictions of what should be observed (Ekman 1993). Kagan (2007) has called for those in the affective sciences to take more of a Baconian, bottom-up approach, given that the field is still in its early stages of development. In line with these traditions, we provide tactile descriptions of emotion and do so with a more modest coding system than those available for the face and voice. We describe the duration of touch and the tactile behaviors most often used to communicate sympathy, anger, and happiness. Our coding system includes several qualities of touch (e.g., squeezing, stroking, tapping, trembling, hitting, scratching) and is based on a number of other systems used by researchers investigating touch in human communication (e.g., Argyle 1975; Jones and Yarbrough 1985; Weiss 1992).

Method

Participants

The sample consisted of 212 participants (106 unacquainted dyads) from a large public university who ranged in age from 18 to 40 years (M = 20.15 years, SD = 3.20). Participants received extra credit for an introductory psychology course for participating. The self-identified ethnic background of the sample was primarily Caucasian (34%), Chinese (30%), and Korean (12%). One member of the dyad was randomly assigned to the role of encoder, the other to the role of decoder. Like Banse and Scherer (1996), we use the terms encoding and decoding because they connote the research method; no inference should be made that a “code” exists in the emotional signal. The gender breakdown of the four possible dyads was as follows (encoder–decoder): female–female (n = 24), female–male (n = 27), male–male (n = 27), and male–female (n = 28) (Hertenstein et al. 2006a).

Procedure and Materials

Upon arrival, the encoder and decoder sat at a table and were separated by an opaque black curtain. The participants could neither see nor talk to each other during the experiment, to preclude the possibility that they might provide nontactile clues to the emotion being communicated. Twelve emotion words were displayed serially to the encoder on sheets of paper in a randomized order. The encoder was instructed to think about how he or she wanted to communicate each emotion and then to make contact with the decoder’s bare arm from the elbow to the end of the hand to signal each emotion, using any form of touch he or she deemed appropriate. The decoder could not see any part of the touch because his or her arm was positioned on the encoder’s side of the curtain. We restricted the location of the touch because we wanted to limit the possibility that participants would receive any cues about their partner’s height, gender, build, etc. Participants were not told the partner’s gender, and all tactile displays were video recorded. After each tactile display, the decoder was given a forced-choice response sheet reading “Please choose the term that best describes what this person is communicating to you.” The response sheet contained the following 13 response options: anger, disgust, fear, happiness, sadness, surprise, sympathy, embarrassment, love, envy, pride, and gratitude, as well as none of these terms are correct, included to reduce possible artificial inflation of accuracy rates (see Frank and Stennett 2001). These response options were listed in random order across participants (Hertenstein et al. 2006a).

Measures and Coding

The dependent measure of interest for our hypotheses was the proportion of participants selecting each response option when decoding the tactile stimuli. Thus, for each of the 12 target emotions, the proportion of participants choosing each response option was computed (e.g., the proportion of participants who chose sympathy when the encoder intended to communicate sympathy). This measure is in line with a long tradition in the emotion literature in which forced-choice methodologies have been employed (e.g., Ekman 1972; Frank and Stennett 2001; Tracy et al. 2009). In addition to this measure, we asked decoders on a questionnaire at the end of the study: “Do you think a male or a female was touching you in this experiment?” The response options were male or female.
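To make this measure concrete, the proportions can be computed from a simple tally of decoder responses. Below is a minimal sketch in Python; the response data are hypothetical and illustrate only the computation, not values from the study.

```python
from collections import Counter

# Hypothetical decoder responses for one target emotion (sympathy):
# each entry is the option a decoder selected when the encoder
# attempted to communicate sympathy.
responses = ["sympathy", "love", "sympathy", "sympathy", "none",
             "sympathy", "gratitude", "sympathy", "sympathy", "love"]

n = len(responses)
# Proportion of decoders choosing each response option
proportions = {option: count / n for option, count in Counter(responses).items()}
print(proportions)  # {'sympathy': 0.6, 'love': 0.2, 'none': 0.1, 'gratitude': 0.1}
```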

All of the tactile displays were coded on a second-by-second basis by research assistants who were naive to the emotion being communicated. The coding system was informed by a survey of coding systems used by researchers investigating touch (e.g., Argyle 1975; Jones and Yarbrough 1985; Weiss 1992). The specific types of touch that were coded included holding the other, squeezing, stroking, rubbing, pushing, pulling, pressing, patting, tapping, shaking, pinching, trembling, poking, hitting, scratching, massaging, tickling, slapping, lifting, picking, finger interlocking, swinging, and tossing (i.e., tossing the decoder’s hand). In addition, the duration that each encoder touched the decoder for each emotion was calculated. Interrater agreement on all of the codes, based on 20% overlap in coders’ judgments, ranged from .83 to .99.
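The agreement statistic underlying the .83 to .99 range is not specified; Cohen’s kappa is one common choice for categorical touch codes of this kind. A hedged sketch of how agreement on the doubly coded 20% of displays might be computed (the codes below are hypothetical):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical second-by-second codes from two independent raters
# for the same tactile displays (one label per second).
rater_1 = ["patting", "patting", "stroking", "squeezing", "stroking",
           "patting", "rubbing", "rubbing", "patting", "stroking"]
rater_2 = ["patting", "patting", "stroking", "squeezing", "patting",
           "patting", "rubbing", "rubbing", "patting", "stroking"]

# Cohen's kappa corrects raw percent agreement for chance agreement.
print(f"kappa = {cohen_kappa_score(rater_1, rater_2):.2f}")
```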

Results

Decoding of Emotions

In Table 2, we present the proportion of participants choosing the two most common response options for each of the target emotions of interest—sympathy, anger, and happiness. We display these data for each of the four gender dyad combinations. For example, among the all-male dyads, 70.4% of the decoders chose anger and 14.8% chose fear in the condition in which the encoder attempted to communicate anger.

Table 2 Mean recognition rates (%) for the anger, happiness, and sympathy tactile conditions

For each of our predictions, we conducted four binomial tests—one for each of the four possible gender dyad combinations—for each emotion of interest (i.e., sympathy, anger, and happiness). Specifically, the proportion of participants who chose each response option was assessed against chance for all of the target emotions. Following Frank and Stennett (2001), we set chance at 25% (for a rationale, see Hertenstein et al. 2006a). This strategy allowed us to test whether a given gender dyad type was capable of communicating a given emotion at above-chance levels. In the example described above, the binomial tests indicated that anger was chosen at above-chance levels, whereas fear was not.
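To illustrate the test, the all-male anger cell described above can be reconstructed from the reported percentages: with 27 male–male dyads, 70.4% choosing anger corresponds to 19 of 27 decoders, and 14.8% choosing fear to 4 of 27. These counts are our reconstruction, not values reported in the article; a sketch in Python:

```python
from scipy.stats import binomtest

# 19 of 27 decoders in all-male dyads chose "anger" (~70.4%),
# tested against the 25% chance level used in the article.
anger = binomtest(k=19, n=27, p=0.25, alternative="greater")
print(f"anger: p = {anger.pvalue:.4f}")  # far below .05: above chance

# The second most common choice, "fear" (4 of 27, ~14.8%), is not above chance.
fear = binomtest(k=4, n=27, p=0.25, alternative="greater")
print(f"fear:  p = {fear.pvalue:.4f}")   # not significant
```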

The data presented in Table 2 support our hypotheses. Our first prediction was that when females are in an experimental dyad, sympathy would be decoded at above-chance levels. Supporting this hypothesis, sympathy was communicated at greater-than-chance levels only when at least one member of the dyad was female (on average, 62% accuracy). Importantly, the second most commonly chosen response option for these dyad types never exceeded chance levels. Dyads consisting solely of males did not communicate sympathy at above-chance levels.

Our second prediction was that dyads involving males would communicate anger with touch at above-chance levels. Consistent with this hypothesis, anger was communicated at greater-than-chance levels only when at least one member of the dyad was male (on average, 62% accuracy). Moreover, the second most commonly chosen response option for these dyad types did not exceed chance. Dyads consisting only of females did not communicate anger at above-chance levels.

Finally, we predicted that dyads comprised solely of females would communicate happiness with touch at above-chance levels. Supporting this hypothesis, only dyads consisting solely of females communicated happiness at greater-than-chance levels, and the second most common response option, gratitude, was not chosen above chance. This finding on happiness dovetails with studies showing that women smile more (LaFrance et al. 2003), share emotions more (Rimé et al. 2002), and experience more prosocial emotions (Shiota et al. 2006). It should be noted that the other target emotions investigated were communicated at greater-than-chance levels either by all four dyad types (fear, disgust, love, gratitude) or by none of them (sadness, surprise, embarrassment, envy, and pride).

It is clearly possible that decoders could reliably infer the gender of the encoder, perhaps from the quality of the touch administered. This categorization, furthermore, could have influenced their judgments of the emotion-related touches (Hess et al. 2004). As indicated, the gender of the encoder was not verbally revealed to the decoder. However, was it possible for decoders to ascertain encoders’ gender via touch? To address this question, we computed the percentage of cases in which decoders accurately inferred the gender of the encoder. Setting chance at 50%, we conducted binomial tests and found that 79% of female decoders correctly identified male encoders and 96% correctly identified female encoders (both ps < .01). For male decoders, 70% (p = .052) correctly identified male encoders and 81% (p < .01) correctly identified female encoders. These results indicate that decoders were capable of accurately decoding the encoder’s gender.
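As an illustration of these tests, the marginal result for male decoders judging male encoders can be reconstructed from the figures above: with 27 male–male dyads, 70% correct corresponds to 19 of 27, which reproduces the reported p value of .052 under a two-sided binomial test. The counts are our reconstruction; a sketch:

```python
from scipy.stats import binomtest

# Male decoders identifying male encoders: 19 of 27 correct (~70%),
# tested two-sided against a 50% chance level.
result = binomtest(k=19, n=27, p=0.5, alternative="two-sided")
print(f"p = {result.pvalue:.3f}")  # ~= .052, matching the reported value
```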

Encoding of Emotions

Did the gender composition of the dyads influence the tactile actions associated with the communication of the different emotions? In Table 3, we present the average durations of tactile contact in the four dyad types for the emotions of interest. One-way omnibus ANOVAs were performed for the three emotions of interest, with gender dyad type (four levels) entered as the independent factor and the duration of the tactile behaviors as the dependent variable. The duration of tactile behaviors did not differ among the dyad gender types for any of the emotions (all ps > .05). Post-hoc pairwise comparisons were conducted for each emotion to examine whether duration differed between any two gender dyad combinations; these analyses yielded no statistically significant results (all ps > .05). Together, these analyses indicate that the observed gender differences in the accuracy of communicating sympathy, anger, and happiness could not be attributed to differences in how long participants made tactile contact.

Table 3 Duration of tactile behaviors (in seconds) that were accurately decoded as a function of dyad gender
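A sketch of the duration analysis described above, assuming the per-dyad durations are available as four groups. The numbers are hypothetical, and because the article does not name the post-hoc procedure, plain pairwise t-tests are shown as one possibility:

```python
from itertools import combinations
from scipy.stats import f_oneway, ttest_ind

# Hypothetical touch durations (in seconds) for one emotion, by dyad type.
durations = {
    "female-female": [4.2, 3.8, 5.1, 4.6, 3.9],
    "female-male":   [4.0, 4.4, 3.7, 5.0, 4.1],
    "male-male":     [3.9, 4.8, 4.3, 3.6, 4.5],
    "male-female":   [4.1, 3.5, 4.9, 4.4, 4.0],
}

# One-way omnibus ANOVA: gender dyad type (4 levels) predicting duration.
f_stat, p_val = f_oneway(*durations.values())
print(f"omnibus: F = {f_stat:.2f}, p = {p_val:.3f}")

# Post-hoc pairwise comparisons between every pair of dyad types.
for g1, g2 in combinations(durations, 2):
    t, p = ttest_ind(durations[g1], durations[g2])
    print(f"{g1} vs {g2}: t = {t:.2f}, p = {p:.3f}")
```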

Table 4 presents data relevant to the more specific patterns of tactile behavior that women and men relied on to communicate the emotions of interest. The values reported indicate the percentage of time encoders used each quality of touch as a function of the total time touched for the trial. Here, one finds differences that might explain the decoding differences reported earlier. Sympathy was associated most with patting for all dyad types, but the second most common behavior for all-male dyads was shaking, which was not the case for the other dyad types. Turning to anger, one finds that this emotion was associated with squeezing for all dyad groups except the all-female dyads, the group that did not communicate anger at above-chance levels. In addition, pushing was not one of the most frequent types of touch for the male–female group, whereas it was for the other dyad types. Finally, finger interlocking was among the most frequently employed tactile behaviors for all groups except the one that accurately communicated happiness—the group comprised solely of females. Patting, however, was one of the most frequent types of touch used by the all-female dyads, whereas this was not true for the other dyad types.

Table 4 Percentage of most frequent types of touch used that were accurately decoded as a function of dyad gender

No formal hypotheses were proposed for the tactile encoding behaviors, and the large number of touch qualities coded made inferential analyses unwieldy and difficult to interpret. Moreover, it is important to note that variability in the types of touch used by the dyads was marked. For these reasons, great inferential caution should be exercised when considering the links between encoding behaviors and decoding accuracy, as well as when making comparative statements between the gender dyad types. Indeed, the great variability in the data argues against there being a “prototype” expression of emotion, as is often implicated in the face and voice (Ekman 1993); instead, there seems to be a multitude of ways in which emotion can be communicated via touch between humans.

Discussion

Here, we documented gender differences in the communication of distinct emotions via touch between humans. Guided by evolutionary and social role accounts of emotion, as well as the empirical literature, we hypothesized that women would be able to communicate sympathy and happiness through brief touches to the arm of a stranger, whereas men would be able to communicate anger. The data from the present study supported these predictions. We observed no gender-related differences in the communication of disgust, fear, envy, embarrassment, sadness, pride, love, and gratitude.

Sympathy was only communicated accurately through tactile contact to the arm in dyads comprised of at least one female; dyads consisting of only males communicated sympathy at less-than-chance levels. This result is consistent with studies documenting gender differences in self-reports of compassion (Shiota et al. 2006), self-reports of empathy (Eisenberg and Lennon 1983), and interests in care-taking (Gilligan 1982). Whether similar gender differences in the communication of sympathy would be observed in studies of emotion-related facial display or vocalization is an important question, and one that would more fully characterize the extent to which women enjoy an advantage in communicating the quintessential care-taking emotion—sympathy.

Anger, in contrast, was communicated accurately only when the dyad contained at least one male; dyads comprised of only females communicated anger at less-than-chance levels. Interestingly, the most accurate dyads were those comprised solely of males. These findings dovetail with the well-documented tendency for men to show more aggressive behavior than women (Daly and Wilson 1994), with stereotypes of men as more angry, and with recent evidence from Becker et al. (2007) in the realm of facial displays indicating that humans’ perceptual systems are tuned to be particularly sensitive to angry facial expressions by males. In their research, participants more quickly and accurately classified the word angry with male faces than with female faces, and more quickly and accurately judged angry faces when they were displayed by males than by females.

Finally, we found that the gender composition of the dyads also affected the communication of happiness. That is, only dyads comprised solely of females communicated happiness at greater-than-chance levels. As mentioned, this finding dovetails with studies showing that women smile more (LaFrance et al. 2003), share emotions more (Rimé et al. 2002), and experience more prosocial emotions (Shiota et al. 2006). The data are also consistent with Becker et al.’s (2007) work indicating that participants (a) thought of female facial displays more often than male displays when asked to spontaneously generate a mental image of a happy face, (b) more quickly and accurately classified the word happy with female faces than male faces, (c) more quickly and accurately judged happy faces when they were displayed by females than males, and (d) perceived faces as more happy when they were feminized.

The present study adopted the design of some traditional emotion recognition investigations in the field. Given that the study was not a true experiment, causal inferences must be made cautiously. However, several features of the paradigm increase our confidence in the findings (Hertenstein et al. 2006a). First, in most previous judgment studies, observers judged highly prototypical displays or displays posed by actors, whereas in our study people decoded emotion from the idiosyncratic tactile actions of other untrained participants (see Hertenstein 2010 for a discussion of bottom-up approaches to emotion; see also Clynes and Nettheim 1982 for a unique button-pressing approach to the association of some emotions). Second, our response format included the response option none of these terms are correct, which reduced the likelihood of inflated accuracy rates (Frank and Stennett 2001). Finally, we restricted the tactile stimulation to one location on the body, thus eliminating one aspect of tactile communication—the location of the touch on the recipient’s body—that is likely to provide additional information with respect to the emotion communicated.

What might explain the findings we observed in terms of gender and touch? Because decoders could not see the “tactile interaction” that transpired on the other side of the screen, they must have relied upon tactile cues alone to ascertain the gender of the encoder. Our data indicated that decoders accurately perceived encoders’ gender in 70% to 96% of cases, depending on the specific gender-dyad composition. As a result, decoders may have been interpreting the encoder’s touch by means of gender stereotypes (Hess et al. 2004). If this were occurring, even at non-conscious levels, stereotypes may have inflated accuracy for the emotions most readily associated with gender, such as anger and sympathy. According to Hess et al., women are expected to display more sadness and men more anger. Indeed, when participants rated the likelihood that neutral facial displays exhibited various emotions, women’s faces were expected to display more stereotypically female emotions, whereas men’s faces were expected to show more stereotypically male emotions (Hess et al. 2007). Although our study was not designed to investigate whether these findings hold in the tactile modality, similar processes may underlie our results. This points to a future area of investigation.

It is also possible that decoders’ knowledge of the encoders’ gender played a role in the meaning they attributed to touch because of stereotypes (Brody and Hall 2008). For example, decoders may be more likely to interpret a particular tactile gestalt from a female as sympathy, whereas they may interpret the same tactile gestalt from a male as a different emotion. These explanations are consistent with the accounts of stereotype theorists (e.g., Biernat 2003), as well as with empirical studies (e.g., Hess et al. 2000) indicating that membership in a stereotyped group—in this case, the gender of both the encoder and decoder—can drive ambiguous perceptions in the direction of the stereotypes.

The above explanations focus on the decoder, but the tactile behaviors used by the encoders may also have led to the observed gender differences in perception. There is evidence of gender differences in the behaviors used by encoders to communicate emotion (Hertenstein et al. 2009). In the current study, the gender of decoders was never verbally revealed to encoders by the experimenter. However, because encoders administered the tactile stimulation on their side of the opaque screen, they could see the morphology of the decoder’s arm. Although we did not ask encoders at the end of the study whether they believed they were touching a male or a female decoder, we think it likely that encoders knew the gender of decoders. The gender of the decoder may thus have influenced the tactile behaviors the encoder used to communicate the emotions. Moreover, the gender of the decoder may interact with the gender of the encoder to influence the demonstrated behavior. Indeed, there is evidence of this in Table 4. For example, dyads comprised entirely of males squeezed each other 40% of the time in the anger condition, whereas males squeezed females less than half as often. Also noteworthy in this condition is that squeezing was not among the five most common types of touch in dyads comprised entirely of females and was evident only 14% of the time in dyads comprised of female encoders and male decoders. Overall, there was evidence that the target’s gender influenced the behavior of the encoder and that the genders of the two participants interacted. Again, this is consistent with the literature indicating that group membership norms—in this case, gender norms—for displaying particular emotions influence both the decoding and encoding of emotions (Kirouac and Hess 1999).

In sum, our study documents gender differences in the communication of distinct emotions between humans via touch. Studies have shown that females more accurately identify the meaning of a variety of non-verbal cues, including expressions of emotion (Brody and Hall 2008). However, Brody and Hall persuasively argue that the goal for current researchers is to identify and document specific variables that moderate and mediate the gender differences evident in non-verbal communication. Indeed, our study demonstrates that, in the tactile modality, there is not an overall female advantage in encoding and decoding emotion, as is sometimes suggested; rather, the advantage is emotion specific.

Our study suggests a number of important directions for future research. First, it will be important for research to identify the sources of the gender differences in the communication of emotion that we observed. We examined whether these differences might be related to differences in tactile behaviors; experimental studies could directly document whether such behavioral differences produce differences in the decoding of sympathy, anger, and happiness. Second, studies of cultures that are more gender-stratified or more egalitarian could more explicitly address whether these gender differences in the communication of sympathy, anger, and happiness hold where gender roles are more or less differentiated (Wood and Eagly 2002). This kind of research would document the deeper origins of likely gender differences in the communication of emotion via touch. Third, future research should examine why some paradigms used to study tactile human communication do not yield gender differences, at least in the decoding of emotion (e.g., Bailenson et al. 2007; Hertenstein et al. 2009). Fourth, investigations specifically designed to uncover potential gender differences in how emotions are communicated via haptic devices (e.g., Bailenson et al. 2007; Smith and MacLean 2007) and button-pressing (e.g., Clynes and Nettheim 1982) would be valuable. Finally, it will be important to examine possible gender differences from a developmental perspective (Hertenstein 2002). Our sample contained a few participants (four over 30 years old) who extended its age range, which may have influenced the findings. It will be important for future research to systematically examine how age interacts with gender in the communication of emotion via touch.