Journal of Nonverbal Behavior, Volume 36, Issue 1, pp 1–21

How Do We Communicate About Pain? A Systematic Analysis of the Semantic Contribution of Co-speech Gestures in Pain-focused Conversations

Authors

  • Samantha Rowbotham
    • School of Psychological Sciences, University of Manchester
  • Judith Holler
    • School of Psychological Sciences, University of Manchester
    • Max Planck Institute for Psycholinguistics
  • Donna Lloyd
    • School of Psychological Sciences, University of Manchester
  • Alison Wearden
    • School of Psychological Sciences, University of Manchester
Original Paper

DOI: 10.1007/s10919-011-0122-5

Cite this article as:
Rowbotham, S., Holler, J., Lloyd, D. et al. J Nonverbal Behav (2012) 36: 1. doi:10.1007/s10919-011-0122-5

Abstract

The purpose of the present study was to investigate co-speech gesture use during communication about pain. Speakers described a recent pain experience and the data were analyzed using a ‘semantic feature approach’ to determine the distribution of information across gesture and speech. This analysis revealed that a considerable proportion of pain-focused talk was accompanied by gestures, and that these gestures often contained more information about pain than speech itself. Further, some gestures represented information that was hardly represented in speech at all. Overall, these results suggest that gestures are integral to the communication of pain and need to be attended to if recipients are to obtain a fuller understanding of the pain experience and provide help and support to pain sufferers.

Keywords

Pain communication · Co-speech gestures · Semantic feature analysis · Gesture–speech interaction

Introduction

Background

Pain is a sensation that we experience throughout our lifetime, and it motivates us to seek help from doctors more than any other symptom (Crook et al. 1984; Hurwitz 2003). Further, we are frequently driven to communicate our pain to others, whether to receive explanation, sympathy and understanding, or treatment and support (Ehlich 1985; Hyden and Peolsson 2002; Prkachin et al. 1994). However, in many instances, and especially in the case of chronic or prolonged pain, there is no visible sign of injury or damage (such as a wound) that can become the focus of communication (Goubert et al. 2005; Heath 2002). Thus, the question of how this internal experience can be translated from the private world of pain into the external public domain is of great importance. Not only are effective pain management and patient satisfaction directly linked to effective doctor-patient communication about pain (Arnold 2003; McDonald and Molony 2004; Puntillo 2003), but, more importantly, the inability to communicate pain adequately can leave pain sufferers feeling isolated and frustrated (Frank 1991; Scarry 1985).

Pain Communication

Despite the need to share our pain with others, there are a number of barriers to successful pain communication in both everyday and medical settings. Firstly, although verbal self-report is the dominant means of pain communication, there is no external referent for the sensation of pain, thus precluding the existence of a generally established language or vocabulary that is adequate for its description (Ehlich 1985; Ryle 1949; Wittgenstein 1953). Moreover, this absence of an external referent for pain makes the use of everyday language and analogy problematic due to the inherent risk of misunderstanding and uncertainty in describing pain (Hyden and Peolsson 2002; Schott 2004). Because pain cannot be accessed and verified by another person it is impossible to know whether identical verbal descriptions of pain (e.g., a sharp, stabbing pain) given by different individuals, or even by the same individual at different points in time, refer to the same underlying sensation.

Previous research has attempted to supplement the problematic verbal communication of pain with information obtained from alternative methods such as pain rating scales and questionnaires (e.g., Carlsson 1983; Melzack 1975, 1987) and observational indicators of pain, such as facial expression (e.g., Craig 1992; Prkachin 1992; Prkachin and Craig 1995). There is some evidence to suggest that rating scales are useful for the assessment of specific aspects of pain (such as intensity) across different points in time (Joyce et al. 1975) and for distinguishing between specific types of pain within certain syndromes (e.g., between types of phantom limb pain, Crawford 2009). However, these tools constrain the pain description to a limited number of predetermined categories and descriptors that may not directly map onto the actual experience or capture all aspects of it (Bergh et al. 2005; Spiers 2006). Further, the items and responses require interpretation by the patient and physician, respectively, and are therefore subject to the same problems as spontaneous verbal descriptions (Schott 2004). Meanwhile, although observational measures, such as facial expression, are useful for determining the presence of pain, they are often generic across different types of pain, largely absent during chronic or less intense pain, and observers often underestimate pain intensity based on these behaviors (Hadjistavropoulos et al. 1996; Prkachin 1992; Prkachin et al. 1994; Prkachin and Craig 1995; Wilkie 1995).

Co-speech Hand Gestures

One form of communication, co-speech hand gestures, may help us to overcome some of the problems in understanding and assessing pain. Co-speech gestures are produced naturally and spontaneously during speech and involve movements of the hands, arms, and occasionally the whole body (Kendon 2004; McNeill 1985, 1992). Moreover, these gestures are temporally, pragmatically, and semantically linked with speech and the two modalities jointly contribute to the expression of meaning (Kendon 2000, 2004; McNeill 1992, 2005). Thus, the consideration of both the verbal and gestural components of an utterance can allow a fuller insight into a speaker’s thoughts and the message that is being conveyed (McNeill 1992). The fact that certain types of co-speech gestures are imagistic depictions that embody, and thus externalize meaning suggests that they may be of particular utility in conveying information about the internal, bodily experience of pain.

Co-speech gestures, particularly those that are semantically linked to speech (often referred to as representational gestures), can convey a wide range of information about both concrete and abstract concepts and can be used to refer to actual or fictive entities and events in the physical or imagined environment of the speaker (McNeill 1992). Of particular importance here is the finding that representational gestures, particularly iconic gestures, can contain semantic information that is not contained in speech at all, thus contributing unique information to the communication process (Alibali et al. 1997; Beattie and Shovelton 1999a; Emmorey and Casey 2001; Holler and Beattie 2003a; Kelly and Church 1998; McNeill 1992).

Co-speech gestures can also clarify the verbal component of the message and compensate for problems in verbal communication (Holler and Beattie 2003a, b). For example, speakers tend to convey more information through gestures than speech when discussing visual or spatial information, which is difficult to communicate verbally (e.g., Bavelas et al. 2002; Emmorey and Casey 2001; Graham and Argyle 1975; Kendon 1985). Here, gestures may be more suited to the representation of this information as they are visuo-spatial in nature and create visible images in the shared external space. However, this is just one of the many functions gestures can fulfill (see Kendon 1985) and does not mean that gestures are to be considered as a secondary communication channel which is predominantly compensatory in nature. Rather, speech and co-speech gestures are considered to be of equal importance in the process of communication, with the two modalities complementing each other according to their relative strengths and limitations in terms of how they represent information.

There is also considerable evidence that co-speech gestures aid addressees’ comprehension. For example, addressees are significantly more accurate at recalling and recounting information, understanding indirect requests, and identifying target objects and actions when information is present in both speech and gesture compared to speech alone (Beattie and Shovelton 2001; Cohen and Otterbein 1992; Feyereisen 2006; Galati and Samuel 2011; Graham and Argyle 1975; Kelly 2001; Kelly et al. 1999; Riseborough 1981; Rogers 1978). Moreover, research using paradigms from the field of neuroscience, such as ERPs and fMRI, has provided evidence that our brain semantically integrates the information represented in gesture and speech (Holle et al. 2008; Kelly et al. 2004; Özyürek et al. 2007; Willems et al. 2009; Wu and Coulson 2007).

Taken together, these findings suggest that not only do co-speech gestures contain a significant amount of semantic information but also that addressees are able to use this information to improve their understanding. Thus, given the difficulties associated with the verbal communication of pain, these findings suggest that co-speech gestures may make an important contribution to the communication of pain. In particular, they may convey information about the pain experience that is not contained in speech, as well as clarify the meaning of the pain description, thus preventing misinterpretation and facilitating a better understanding of the experience.

Co-speech Gestures and Pain Communication

Qualitative studies of gesture use in pain communication suggest that people employ co-speech gestures in various ways when talking about pain, both in genuine medical consultations and in pain-focused interviews with researchers (Heath 1984, 1989, 2002; Hyden and Peolsson 2002). Participants used gestures to identify the location of the pain, both by pointing directly and by performing gestures around the site of the pain; to demonstrate the cause of the pain by miming actions that cause pain; and to convey semantic information about aspects of pain such as quality, for example by tapping their fingers lightly on the palm of their hand to denote a tingling sensation (Heath 1986, 1989, 2002; Hyden and Peolsson 2002). Although this indicates that gestures may contribute important information about pain, these studies were based on descriptive, in-depth analyses of individual gestures. While this is an entirely valid approach for understanding the role of gesture in the sequential organization of pain-focused talk, as well as the informative value of gestures in particular instances, it does not allow us to draw general conclusions about the communicative contribution of gestures during pain descriptions or to estimate the significance of their overall contribution. We currently lack a detailed analysis of the use of co-speech gestures during pain-focused encounters. As such, a more systematic study into the kinds of information co-speech gestures represent, their proportional contribution to the overall message, and the ways in which gestures are related to speech is needed in order to fully understand how people communicate about pain.

A plethora of research suggests that nonverbal behavior plays an important role within doctor-patient communication and relationships, particularly in terms of communicating emotion and providing clues to suffering and distress (see Roter and Hall 2006 for a comprehensive review of this area). While the communication of emotion and distress is undeniably important within medical interactions, research has not yet explored the other functions that nonverbal communication may serve within medical interaction and thus may underestimate the importance of these aspects of communication in this setting. The present research aims to build on this work by highlighting the potential contribution of a particular type of nonverbal communication, representational co-speech hand gestures, to the communication of semantic information about pain, an aspect of nonverbal communication that has been relatively neglected within the medical interaction literature.

A combined approach, with elements of both qualitative and quantitative analysis, will be used to systematically investigate how people use co-speech gestures when talking about a pain experience. To accomplish this we will use an established methodology, which has been applied to measuring the semantic content of gestural representations and their interaction with speech in both comprehension and production studies (Beattie and Shovelton 1999a, b, 2001; Gerwing and Allison 2009; Gullberg and Kita 2009; Holler and Beattie 2002, 2003a; Holler et al. 2009; Holler and Stevens 2007). If co-speech gestures are found to play a crucial role in the communication of pain, this will be of great significance to any real world setting in which the understanding of pain matters. This is particularly important in light of the finding that during medical consultations physicians spend a considerable amount of time looking at patient notes and often orient both posture and gaze towards the computer screen on which medical records are displayed, such that patients are sometimes outside of the physician’s visual field altogether (Hartzband and Groopman 2008; Heath 1984, 1986; Makoul et al. 2001; Margalit et al. 2006; McGrath et al. 2007; Ruusuvuori 2001).

Moreover, an investigation of this type would considerably advance our knowledge of co-speech gestures, as previous research has almost exclusively focused on descriptions of rather concrete stimuli (such as cartoons and spatial images). While it is known from this work that co-speech gestures are good at encoding certain semantic aspects (e.g., relative position and size information), as of yet we have no idea whether this holds for other topics of talk. Pain talk is of particular interest here because communicating this type of private, visually inaccessible information that is nevertheless based on concrete, perceptual experience, crosses the boundary between concrete and abstract information. Hence, investigating this new domain of everyday talk is relevant from two perspectives, to advance our understanding of pain communication as well as that of co-speech gesture production.

Method

Participants

Twenty female undergraduate psychology students from the University of Manchester participated in return for course credit. Recruitment took place using posters and email announcements requesting participants who had experienced an episode of pain within the previous 2 weeks. It was necessary to exclude the data from two participants: one participant did not follow the instructions and the data from the other could not be analyzed due to problems with the recording equipment. Of the 18 participants included in the analysis, 17 were right handed (according to the Edinburgh Laterality Inventory, Oldfield 1971), 16 were native English speakers (2 were fluent English speakers with German as their first language),1 and none had previously been diagnosed as language impaired. The mean age of the sample was 21.72 years (SD = 5.04; Range = 19–36 years). The pain episodes described by participants were headache (n = 4), toothache (n = 2), body pain (e.g., back/shoulder/arm; n = 6), stomach pain (n = 4), blister (n = 1), and tattoo (n = 1).

Procedure

Participants took part in a semi-structured interview in which they talked to the researcher (SR) in detail about their recent pain experience. A standard conversational setting was used in which the researcher and participant sat opposite each other in chairs at a comfortable distance across a table. Prior to the interview, the researcher informed participants that they were to be videotaped throughout the procedure. To prevent participants from becoming unnaturally aware of their hand gestures the researcher explained that the focus of the study was on how people talk about pain rather than specifically on gesture use. The interviews lasted between 4.5 and 18 minutes (M = 9.44, SD = 3.87) and were recorded split-screen using two wall mounted cameras giving frontal views of both the participant and the researcher.

We developed the interview questions in line with those usually used within pain assessment settings (Harré 1991; Hurwitz 2003) and the questions were designed to tap into different aspects of the pain experience including the nature, intensity and location of the pain, emotional response to the pain, beliefs about the cause of the pain, and ability to control the pain.

Following the interview, we debriefed participants about the purpose of the study and reminded them of their freedom to withdraw. All participants allowed their data to remain in the study and when questioned none of the participants indicated that they had guessed that the purpose of the study was to investigate the use of hand gestures.

Analysis

Segmentation of speech units

We transcribed all speech verbatim and checked the transcripts for accuracy before segmenting the transcribed speech into ideation units (segments of speech that express an idea; Butterworth 1975). We chose to use ideational rather than clausal speech units because the focus of the analysis was on the relative contributions of speech and gesture in conveying semantic information about the experience of pain (see Butterworth 1975; Holler and Beattie 2002), thus it seemed more sensible to segment speech according to semantic rather than grammatical considerations.

Gesture identification and classification

Movements of the hands and arms, and in some cases the whole body, were classified as co-speech gestures if they were temporally linked with speech in a semantic or pragmatic manner, and interpretable as part of the speaker’s communicative message (see McNeill 1992, for a more detailed explanation of the concepts of temporal, pragmatic and semantic synchrony). Movements that were not connected to the speech in this way and did not appear to form part of the intended communicative message, such as self-touching, object manipulations or posture shifts were not considered to be co-speech gestures (Goldin-Meadow 2003; Kendon 1997; Knapp and Hall 2010) and were excluded from the analysis.

Next, we classified all co-speech gestures into the categories of ‘representational’ or ‘non-representational’ gestures. Representational gestures (also called topic gestures; Bavelas et al. 1995; Bavelas et al. 1992) were defined as those that were directly related to the semantic content of speech (Alibali et al. 2001; Jacobs and Garnham 2007). These included iconic gestures (e.g., using the hand to convey the idea of a bag strap pulling down on the shoulder) and metaphoric gestures (e.g., a gesture that moves diagonally upwards to convey the idea of pain intensity increasing over time; McNeill 1992). Also included here were concrete and abstract pointing gestures, for example pointing to the location of pain on the body or to an abstract concept in the gesture space (McNeill 1992). Another subclass of representational gestures that emerged and was included here was ‘abstract-descriptive gestures’ (Hyden and Peolsson 2002); these were imagistic and semantically related to speech but contained information which could not be visually accessed (i.e., about the personal, subjective experience of pain; e.g., a gesture denoting a throbbing pain), meaning that they could not be classified as iconic according to McNeill’s (1992) definition. The separate subcategories of representational gestures were initially identified and coded but were collapsed into the category of representational gestures for the purpose of analysis. Non-representational gestures were defined as those that were not directly related to the semantic content of speech but were instead related to the discourse structure and the regulation of conversation as a social system. These included interactive gestures (such as those used to request information or offer the turn of speaking; Bavelas et al. 1995; Bavelas et al. 1992), beats (biphasic movements relating to emphasis and rhythm; McNeill 1992), and gestures that serve pragmatic functions (such as the oscillating hand movement used to indicate uncertainty; Kendon 2004).
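To make the two-tier classification concrete, the following sketch lays out the gesture categories described above as a simple data structure (a minimal illustration in Python; the enum and helper function are ours and are not part of the authors' coding procedure):

```python
from enum import Enum

class GestureType(Enum):
    # Representational (topic) gestures: directly tied to the semantic content of speech
    ICONIC = "iconic"                       # e.g., hand conveying a bag strap pulling on the shoulder
    METAPHORIC = "metaphoric"               # e.g., upward diagonal movement for intensity increasing over time
    POINTING = "pointing"                   # concrete or abstract deixis, e.g., pointing to the pain site
    ABSTRACT_DESCRIPTIVE = "abstract-descriptive"  # imagistic but visually inaccessible content (e.g., throbbing)
    # Non-representational gestures: tied to discourse structure and conversation management
    INTERACTIVE = "interactive"             # e.g., offering the speaking turn
    BEAT = "beat"                           # biphasic movements marking emphasis and rhythm
    PRAGMATIC = "pragmatic"                 # e.g., oscillating hand indicating uncertainty

REPRESENTATIONAL = {GestureType.ICONIC, GestureType.METAPHORIC,
                    GestureType.POINTING, GestureType.ABSTRACT_DESCRIPTIVE}

def is_representational(gesture: GestureType) -> bool:
    """Collapse the subcategories into the binary distinction used in the analysis."""
    return gesture in REPRESENTATIONAL
```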

Overall, we identified and classified 1,759 gestures into one of these categories. To check the reliability of the identification and classification of these movements, two judges, one of whom was blind to the study aims, independently identified and classified (in two separate, independent steps) all gestures exhibited by three randomly chosen participants, constituting 10% of all gestures (n = 182). Cohen’s Kappa was used to determine the reliability of this identification as it takes into account the chance agreement expected between the two judges (Cohen 1960). Percentage agreement for gesture identification was 93%, signaling a high level of agreement between judges. Cohen’s Kappa was K = .84 for gesture classification, which corresponds to a high level of agreement between the judges (Landis and Koch 1977, p. 165). The two judges discussed and resolved all discrepancies.
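Cohen's Kappa corrects raw agreement for the agreement expected by chance, kappa = (p_o − p_e) / (1 − p_e). A minimal sketch of the computation is given below; the judge codes shown are hypothetical, as the judges' item-by-item classifications are not reported:

```python
from collections import Counter

def cohens_kappa(judge_a, judge_b):
    """Cohen's kappa for two judges coding the same items:
    p_o is the observed proportion of agreement; p_e is the agreement
    expected by chance from each judge's marginal category proportions."""
    assert len(judge_a) == len(judge_b)
    n = len(judge_a)
    p_o = sum(a == b for a, b in zip(judge_a, judge_b)) / n
    freq_a, freq_b = Counter(judge_a), Counter(judge_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(judge_a) | set(judge_b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two judges classifying ten gestures as representational/non-representational
a = ["rep", "rep", "non", "rep", "non", "rep", "non", "rep", "rep", "non"]
b = ["rep", "rep", "non", "rep", "rep", "rep", "non", "rep", "rep", "non"]
print(round(cohens_kappa(a, b), 2))  # 0.78 for this toy example
```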

Semantic Feature Analysis

The gestural data included in this stage of the analysis consisted exclusively of gestures that contained semantic information about the pain experience, i.e., representational gestures. A total of 757 gestures and 2,184 speech units were included in the analysis.

To assess the type and amount of semantic information about the pain experience contained in gesture and speech we used a semantic feature analysis (see Beattie and Shovelton 1999a, b; Beattie and Shovelton 2001; Gerwing and Allison 2009; Holler and Beattie 2002, 2003a). Rather than applying a predetermined set of semantic categories, this methodological approach involves the creation of appropriate categories based on the range of information contained in a dataset. Eight semantic categories were empirically derived from the present corpus (see Table 1).
Table 1

Definitions of the eight empirically derived semantic features

Category  | Definition
Location  | Information about where the pain was located
Size      | Information about the perceived or actual size of the body area affected by the pain
Quality   | Information about the sensation of the pain; how the pain feels or what it is like
Intensity | Information about the strength or intensity of the pain
Duration  | Information about the duration of the pain and/or the progression or evolution of pain either within an episode of pain or over time
Cause     | Information about the actual, perceived or possible causes of the pain
Effects   | Information about the various effects and consequences of the pain, including physical, emotional and social consequences
Awareness | Information about the participants’ awareness of the pain and/or the presence of the pain

We employed a binary coding scheme to score the gesture and speech units independently according to each of the eight semantic categories shown in Table 1. The individual gestures and speech units were assigned a score of 0 if information about the semantic feature was not explicitly provided and a score of 1 if the information was explicitly provided; thus, each gesture and each speech unit included in the analysis was assigned eight separate scores, one for each semantic category. An intermediate category for ambiguous information (e.g., Holler and Beattie 2003a) was not included here as, for the purpose of the present analysis, we were interested only in the information represented explicitly in speech and gesture. To prevent the semantic information contained in one modality from influencing the scores assigned to the other modality, this analysis was conducted independently for speech and gesture; speech units were scored based on the transcriptions and audio-recordings, while gestures were scored with the sound turned off.

See Appendix 1 for examples of speech and gestures containing information for each of the eight semantic features. An example of the way in which the information in gesture and speech was scored is provided below (speech is marked with ‘single quotes’ with the part of the speech that was accompanied by the whole gesture marked using [square brackets]; the accompanying gesture is described under the speech and also contained within [square brackets]):

‘It feels like [they’re just sat there with like a hammer, hitting me], that’s how it feels’

[Right hand held next to head near temple, the fingertips are held against the thumb and facing towards the head. Hand moves rapidly backwards and forwards as if hammering against the head]

Here the speech would be assigned a score of 1 for the category quality (hammering) and scores of 0 for the categories of location, size, intensity, duration, cause, effects, and awareness. The gesture would be assigned scores of 1 for location (right hand side of the head, near the temple), quality (repetitive hammering, as shown by movement of hand), and size (small, localized pain, as denoted by the small area created by the thumb and fingertips being held together) and scores of 0 for intensity, duration, cause, effects, and awareness.
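One way to picture this binary coding scheme is as an eight-field record per gesture or speech unit. The sketch below reproduces the scores assigned to the hammering example above; the dictionary representation itself is our illustration rather than the authors' coding instrument:

```python
FEATURES = ["location", "size", "quality", "intensity",
            "duration", "cause", "effects", "awareness"]

def empty_scores():
    # 0 = feature not explicitly represented; 1 = explicitly represented
    return {feature: 0 for feature in FEATURES}

# Scores for the hammering example, coded independently for each modality
speech_scores = dict(empty_scores(), quality=1)                       # "hammer, hitting me"
gesture_scores = dict(empty_scores(), location=1, quality=1, size=1)  # hand at temple, hammering motion, small area

# Information units contributed by each modality for this gesture-speech ensemble
print(sum(speech_scores.values()), sum(gesture_scores.values()))  # 1 3
```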

To assess the reliability of the semantic feature scoring two independent judges, one of them blind to the experimental hypotheses, coded all gestures and speech units containing semantic information that were produced by the same three participants (amounting to 12% of gestures included in the analysis, n = 87; and 10% of speech units, n = 210) according to the eight semantic categories. The individual Cohen’s Kappa values for the individual semantic features ranged between K = .81 and K = 1.00 for gesture and between K = .66 and K = .96 for speech. The overall Cohen’s Kappa values were K = .91 for gesture and K = .85 for speech. According to Landis and Koch (1977), these values indicate at least substantial, and in many cases high levels of agreement. Again, the two judges discussed and resolved all discrepancies.

Results

The first stage of the analysis describes the rate of gestures in our corpus. The next stage sought to explore the amount and type of semantic information represented in gestures and the accompanying speech units (i.e., relating to the same semantic idea) during pain communication. An additional analysis of these patterns was also conducted with all the units of speech included (i.e., including also those ideational units not accompanied by representational gestures). The final stage focuses on the semantic interplay between gesture and the accompanying speech during pain communication and consists of an analysis of the distribution of information across speech only, gesture only, or speech and gesture together. The comparisons at each stage consider all semantic categories combined as well as individually. Both parametric (repeated measures t tests) and non-parametric tests (Friedman’s tests and Wilcoxon Signed Ranks tests) were used in accordance with the results yielded by Shapiro–Wilk tests. A Bonferroni correction for multiple comparisons was applied where appropriate; otherwise, an alpha level of .05 was employed. All tests applied were two-tailed.
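The test-selection logic described above can be sketched as follows; the SciPy functions are standard, but the choice to check normality on the paired difference scores and the way the Bonferroni correction is applied are our assumptions about an otherwise routine procedure, not the authors' analysis script:

```python
import numpy as np
from scipy import stats

def compare_paired(gesture_units, speech_units, n_comparisons=1, alpha=0.05):
    """Paired comparison of per-participant information-unit counts.
    A Shapiro-Wilk check decides between a paired t test and a Wilcoxon
    signed-ranks test; alpha is Bonferroni-corrected for multiple comparisons."""
    g = np.asarray(gesture_units, dtype=float)
    s = np.asarray(speech_units, dtype=float)
    _, p_norm = stats.shapiro(g - s)                    # normality of the paired differences (our choice)
    if p_norm > 0.05:
        name, (stat, p) = "paired t", stats.ttest_rel(g, s)
    else:
        name, (stat, p) = "Wilcoxon", stats.wilcoxon(g, s)
    return name, stat, p, p < alpha / n_comparisons     # test used, statistic, p value, significance
```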

Rate of Co-speech Gestures during Pain Communication

Overall, gestures were produced at a mean rate of 7.74 gestures per 100 words (SD = 5.09, Range = 3.82–26.28; gestures per minute: M = 14.78, SD = 2.86, Range = 2.61–13.92). Representational gestures, the focus of the semantic feature analysis, accounted for 43% of the gestures in the present sample. Representational gestures were produced at a mean rate of 4.53 gestures per 100 words (SD = 1.43, Range = 1.35–7.03; gestures per minute: M = 8.70, SD = 2.72, Range = 1.97–13.34), indicating that they were a frequent occurrence during pain descriptions.
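For illustration, the rate measures reported here are simple ratios of gesture counts to words spoken and to interview duration; the interview figures in the example below are hypothetical:

```python
def gesture_rates(n_gestures, n_words, duration_minutes):
    """Per-participant gesture rate per 100 words and per minute."""
    return 100 * n_gestures / n_words, n_gestures / duration_minutes

# Hypothetical interview: 130 gestures over 1,680 words in 9.4 minutes
per_100_words, per_minute = gesture_rates(130, 1680, 9.4)
print(round(per_100_words, 2), round(per_minute, 2))  # 7.74 13.83
```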

Representation of Semantic Information in Representational Gestures and Speech

Given the large number of representational gestures produced during pain communication, we investigated the amount and type of information contained in these gestures. The first step involved comparing the overall information they contain to that represented in the accompanying speech (i.e., those portions of speech considered to be part of the same ideational units as the gestures) to gain more insight into the significance of their contribution. As indicated in Table 2, the results revealed that overall (i.e., when collapsing across all eight semantic categories), significantly more units of information were represented in gesture than in speech, t(17) = 4.03, p = .001, with gesture accounting for 57% (974 out of 1,724) of information units overall.
Table 2

Descriptive statistics, percentages and results of statistical comparisons of the number of information units represented in gesture and the accompanying speech for the individual semantic features

Semantic feature | Gesture: Mean (SD) [%]^a | Speech: Mean (SD) [%]^a | t value | df | p value (2-tailed)
Location         | 22.17 (13.78) [74%]      | 7.67 (4.47) [26%]       | 5.51    | 17 | .001*
Quality          | 7.33 (3.38) [52%]        | 6.89 (2.81) [48%]       | 0.82    | 17 | .425
Intensity^b      | 0.00 (6.00) [26%]        | 3.00 (8.00) [74%]       |         |    | .001*
Size^b           | 8.50 (21.00) [92%]       | 0.00 (4.00) [8%]        |         |    | .001*
Effects^b        | 3.50 (9.00) [38%]        | 5.00 (16.00) [62%]      |         |    | .002*
Duration         | 4.83 (2.90) [41%]        | 6.94 (3.69) [59%]       | 5.23    | 17 | .001*
Cause^b          | 0.50 (20.00) [37%]       | 3.00 (21.00) [63%]      |         |    | .002*
Awareness        | 2.50 (2.23) [32%]        | 5.22 (4.67) [68%]       | 3.21    | 17 | .005*
Total            | 54.11 (27.74)            | 41.67 (20.45)           | 4.03    | 17 | .001*

* Denotes a significant p value

^a Percentages reflect the amount of information represented in each modality for each semantic feature, i.e., 74% of information about location was contained in gestures while 26% was contained in speech

^b Due to the non-normality of the data a Wilcoxon Signed Ranks test was used and the figures therefore represent Median (Range). Intensity: z = 3.43, N-ties = 15. Size: z = 3.63, N-ties = 17. Effects: z = 3.04, N-ties = 15. Cause: z = 3.08, N-ties = 12

Analyses considering the individual semantic features revealed that significantly more information about the location and size of the pain was represented in gestures than in speech. Conversely, significantly more information about the intensity, effects, duration, cause, and awareness of pain was contained in speech than in gesture. There were no significant differences for information about the quality of the pain contained in speech or gesture (see Table 2).

In line with previous research (e.g., Holler and Beattie 2003a) the above analyses have only taken into account the distribution of information across representational gestures and the accompanying speech, that is, they have been based on gesture-speech ensembles, determined as a gesture and the portion of the accompanying speech that expresses the same semantic idea. Another way to shed light on the communicative contribution of gestures is to consider the entire range of information contained in speech (as captured by the present semantic coding scheme), including those segments of speech that were accompanied by non-representational gestures or were not accompanied by gestures at all. Such an analysis would allow us to draw additional conclusions about the overall communicative contribution of gestures in pain-focused talk. We therefore conducted further analyses with these data included.

As would be expected, when all speech units are considered, speech (M = 67.33, SD = 25.56) contributes significantly more information than gesture (M = 54.11, SD = 27.74), t(17) = 3.92, p = .001. Despite this, a considerable amount of information was still represented in gestures, with this modality accounting for 45% (974 out of 2186) of the total number of information units.

Further analyses of the amount of information contained in gestures and speech (i.e., including all speech) for the individual semantic categories revealed the same pattern of results as above. Specifically, information about location and size was represented significantly more in gesture, while information about intensity, effects, duration, cause, and awareness was represented significantly more in speech (all p values < .001, see Appendix 2 for descriptive statistics and exact significance values). Again, there were no significant differences for quality information contained in gestures and speech.

Distribution of Semantic Information across Speech Only, Gesture Only and Both Gesture and Speech Together for Gesture-speech Compounds

As the preceding section indicates, there appear to be distinct patterns in the distribution of information across gesture and the accompanying speech during pain-focused talk. However, because the above analysis is based on the total number of information units in each modality and permits semantic features to be scored for both speech and gesture simultaneously, it does not consider the semantic interplay of gesture and speech regarding each individual ideational unit. Therefore, in order to conduct a more fine-grained analysis, the data from the gesture-speech ensembles were categorized according to whether information was contained in gesture only, speech only, or gesture and speech together. For example, if a participant said “it’s a strong hammering pain,” and performed a gesture in which the hand moved backwards and forwards next to the head, information about location (head) would be contained in gesture only, information about intensity (strong) would be contained in speech only and information about quality (hammering) would be contained in both gesture and speech together. Table 3 shows how the two modalities interact with regard to each of the semantic features.
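Before turning to Table 3, the categorization logic just described can be sketched as follows; this is a minimal illustration (not the authors' coding software), and the example scores mirror the 'strong hammering pain' utterance above:

```python
def distribute(gesture_score, speech_score):
    """Classify one semantic feature within a single gesture-speech ensemble."""
    if gesture_score and speech_score:
        return "gesture and speech"
    if gesture_score:
        return "gesture only"
    if speech_score:
        return "speech only"
    return None  # feature not represented in this ensemble

# "It's a strong hammering pain" + hand moving backwards and forwards next to the head
gesture = {"location": 1, "quality": 1, "intensity": 0}
speech = {"location": 0, "quality": 1, "intensity": 1}
for feature in gesture:
    print(feature, "->", distribute(gesture[feature], speech[feature]))
# location -> gesture only, quality -> gesture and speech, intensity -> speech only
```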
Table 3

Descriptive statistics, percentages and results of statistical comparisons of the distribution of units of information represented in ‘gesture only’, ‘speech only’ and ‘gesture and speech together’ for each of the semantic features

Semantic feature | Gesture only: Median (Range) [%]^a | Speech only: Median (Range) [%]^a | Gesture and speech: Median (Range) [%]^a | Significant comparisons (Wilcoxon tests, 2-tailed; G = gesture only, S = speech only, GS = gesture and speech together)
Location  | 12.0 (47.00) [66%] | 0.0 (3.00) [3%]   | 6.0 (15.00) [31%] | G > S (z = 3.63, N-ties = 15, p = .001); G > GS (z = 3.62, N-ties = 17, p = .001); GS > S (z = 3.73, N-ties = 18, p = .001)
Quality   | 2.0 (6.00) [25%]   | 1.0 (7.00) [20%]  | 4.5 (9.00) [55%]  | GS > G (z = 3.28, N-ties = 15, p = .001); GS > S (z = 2.88, N-ties = 16, p = .004)
Intensity | 0.0 (4.00) [8%]    | 2.0 (7.00) [68%]  | 0.0 (6.00) [24%]  | S > G (z = 3.43, N-ties = 15, p = .001)
Size      | 8.5 (21.00) [92%]  | 0.0 (1.00) [1%]   | 0.0 (4.00) [7%]   | G > S (z = 3.73, N-ties = 18, p = .001); G > GS (z = 2.73, N-ties = 18, p = .006)
Effects   | 0.0 (4.00) [9%]    | 2.0 (13.00) [45%] | 3.0 (8.00) [46%]  | GS > G (z = 2.94, N-ties = 11, p = .003); S > G (z = 3.04, N-ties = 15, p = .002)
Duration  | 1.0 (3.00) [10%]   | 3.0 (6.00) [37%]  | 4.0 (10.00) [52%] | GS > G (z = 3.37, N-ties = 17, p = .001); S > G (z = 3.28, N-ties = 16, p = .001)
Cause     | 0.0 (6.00) [9%]    | 2.0 (10.00) [46%] | 0.5 (14.00) [45%] | GS > G (z = 2.67, N-ties = 9, p = .008); S > G (z = 3.08, N-ties = 12, p = .002)
Awareness | 0.0 (3.00) [8%]    | 2.5 (14.00) [56%] | 1.0 (6.00) [36%]  | GS > G (z = 2.85, N-ties = 14, p = .004); S > G (z = 3.02, N-ties = 13, p = .003)
Total     | 27.0 (68.00)       | 15.5 (41.00)      | 25.0 (41.00)      | G > S (z = 3.20, N-ties = 18, p = .001); GS > S (z = 3.25, N-ties = 18, p = .001)

^a Percentages reflect the amount of information represented in each modality for each semantic feature, i.e., for location, 66% of information about this feature was contained in gestures only, 3% in speech only and 31% in gestures and speech together

A Friedman’s test showed that, when collapsing across the eight semantic categories, there was a significant difference in the amount of semantic information conveyed through gesture only, speech only, and gesture and speech together, χ2(2) = 14.39, p = .001. As shown in Table 3, significantly more units of information were contained in gesture only (41%; 519 units) and in gesture and speech together (36%; 455 units) than in speech alone (23%; 295 units). There was no significant difference between the amount of semantic information conveyed through gesture only and both gesture and speech together.

We then conducted the same comparisons for each of the individual semantic categories. Friedman’s tests indicated that there were significant differences in the amount of information represented in gesture only, speech only, and gesture and speech together for all eight semantic features (all comparisons significant at p < .001). As indicated in Table 3, follow-up Wilcoxon tests revealed that, in line with the preceding analysis, size and location were represented significantly more in gesture only than in speech only or speech and gesture together, with location information also represented significantly more in gesture and speech together than in speech only. Conversely, and again in line with the preceding analysis, information pertaining to the cause, effects, duration, and awareness of the pain was contained significantly more frequently in speech only than in gesture only. Information about cause, effects, duration, and awareness (but not intensity) was also represented significantly more in gesture and speech together than in gesture only. Information about the quality of the pain was represented significantly more in gesture and speech together than in gesture or speech only. No other comparisons were significant.
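The omnibus-plus-follow-up procedure used here can be sketched as follows. The per-participant counts that would be passed in are those underlying Table 3; the Bonferroni handling is our assumption, and note that SciPy's wilcoxon returns the signed-rank statistic rather than the z values reported in the table:

```python
from itertools import combinations
from scipy import stats

def three_way_comparison(gesture_only, speech_only, both, alpha=0.05):
    """Friedman omnibus test across the three distribution categories,
    followed by pairwise Wilcoxon signed-ranks tests (Bonferroni-corrected)."""
    chi2, p = stats.friedmanchisquare(gesture_only, speech_only, both)
    follow_ups = {}
    if p < alpha:
        conditions = {"G": gesture_only, "S": speech_only, "GS": both}
        corrected_alpha = alpha / 3          # three pairwise comparisons per feature
        for a, b in combinations(conditions, 2):
            stat, p_pair = stats.wilcoxon(conditions[a], conditions[b])
            follow_ups[f"{a} vs {b}"] = (stat, p_pair, p_pair < corrected_alpha)
    return chi2, p, follow_ups
```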

Discussion

The present study investigated co-speech gesture use during pain communication. The results revealed that participants frequently produce co-speech gestures during pain communication, with representational gestures accounting for 43% of all the co-speech gestures in the present corpus. Representational gestures contained significantly more units of information than the accompanying speech and even when all speech units were included in the analysis (including those units of speech that were accompanied by non-representational gestures or not accompanied by a gesture at all), representational gestures still accounted for 45% of the units of information conveyed by participants about their pain experience. A more detailed look at the data revealed that at 41 and 36% respectively, a considerable amount of information about pain was represented in gesture only or speech and gesture together (with speech accounting for only 23% of the information represented). Taken together this highlights the important communicative contribution of co-speech gestures in the context of pain-focused talk.

In light of the evidence that doctors may not visually attend to their patients for substantial portions of the consultation (e.g., Hartzband and Groopman 2008; Heath 1984; Makoul et al. 2001; Margalit et al. 2006; McGrath et al. 2007; Ruusuvuori 2001), these findings are of particular importance if doctors are to understand the pain experience and provide appropriate treatment and support. Encouraging doctors to orient towards patients during descriptions of pain (and also more generally throughout the medical consultation) may also have positive implications in terms of demonstrating attention to the patient and their concerns and allowing for the uptake of information from the whole range of nonverbal cues (such as cues to emotion, lack of understanding, and desire for more information; Bensing et al. 1995; DiMatteo and Hays 1980; DiMatteo et al. 1980, 1986; Hall et al. 1995; Roter et al. 2006; Roter and Hall 2006). Finally, the importance of recognizing the communicative contribution of co-speech gestures in this context may be even more pronounced in the assessment of pain in children, non-fluent English speakers, and people with language impairments, as within these populations the problems inherent in the verbal communication of pain are exacerbated by limited vocabulary. For example, anecdotal evidence suggests that when non-native speakers try to communicate about pain-related sensations, both the patient and doctor use gestures to negotiate a joint understanding of the sensation due to problems in finding the right verbal expression, highlighting an important avenue for future work.

The results revealed that significantly more information about pain location and size was represented in gestures than in speech, suggesting this information may be more amenable to gestural communication in the visible external space. This is in line with numerous studies that have demonstrated that visual and spatial information, especially information about size and relative position, is primarily contained in gestures and is more accurately conveyed through this modality (Bavelas et al. 2002; Beattie and Shovelton 1999a, b; Emmorey and Casey 2001; Graham and Argyle 1975; Holler and Beattie 2002, 2003a; Holler et al. 2009). The present findings build particularly on previous studies that have used cartoon stories as stimuli (Alibali et al. 2001; Beattie and Shovelton 1999a, b, 2001; Holler and Beattie 2002, 2003a, b; Holler and Wilkin 2009; Jacobs and Garnham 2007; McNeill 1992) by demonstrating that within the applied context of pain communication, participants naturally and spontaneously use co-speech gestures that contribute a significant amount of information to the overall message. This shows that gestures are core to communication even when we talk about matters more relevant to everyday life.

Within the social context of pain communication, the production of a large proportion of gestures referring to pain location (and size) may be the result of attempts to establish and maintain a joint focus of attention, or a shared referent for the dialogue. In particular, sufferers may attempt to make pain ‘visible’ through gestures performed around the pain site to substitute for the absence of a visible sign of pain (such as a wound) on which to focus addressees’ attention. Although further research is needed to establish whether gestures do indeed perform this function, previous research has indicated that pointing gestures often serve the pragmatic function of establishing a joint focus of attention during dialogue (Bangerter 2004; Kelly et al. 1999; Louwerse and Bangerter 2005). Further, Heath (2002) described instances in which the performance of representational gestures around the painful area succeeded in orienting the doctor’s attention to the pain site.

The semantic feature analysis revealed that just over 35% of information about the pain experience was encoded in speech and gesture simultaneously. In particular, substantial proportions of information about location, quality, duration, cause, effects, and awareness of pain were contained in speech and gestures together. Interestingly though, the only information to be represented significantly more in gesture and speech together than in either modality alone was about the quality of the pain, suggesting that neither modality is sufficiently able to provide a complete representation of the information. Given the difficulties associated with pain communication and the susceptibility of verbal descriptions to misinterpretation, these findings suggest that gesture and speech may combine to provide a more precise representation of this information. This is potentially in line with the findings of Holler and Beattie (2003b) that gestures can disambiguate spoken information by providing explicit visual clues to interpretation, and Gerwing and Allison (2009) who demonstrated that when the same information was contained in both modalities simultaneously gestures were more specific or precise in terms of the information they conveyed. However, a more detailed analysis is needed to establish whether the gestures here did indeed fulfill such a disambiguating function. In particular, it would be interesting to investigate the specific nature of the information represented in the two modalities with regards to pain quality as this would intuitively appear to be the most difficult aspect to communicate effectively and without ambiguity (consider the problem posed earlier with regards to distinguishing between the different ways in which pain descriptors, e.g., “sharp”, could be interpreted).

Only information about pain intensity was represented significantly more in speech alone than in gesture alone or speech and gesture together, suggesting that only for this type of information is the verbal modality alone sufficient. A possible explanation is that this aspect of pain may be relatively simple to verbalize, for example by referring to pain intensity in terms of numerical values (e.g., “on a scale of one to ten it’s about an eight”) or in relation to other (more or less painful) sensations. However, the coding scheme employed only differentiates on an explicit level whether information is represented or not. For example, gestures were only coded as containing information about pain intensity if this information was ostensibly present within the gesture (e.g., a gesture in which the hand moves across and upwards, illustrating the idea of something increasing across time). However, additional information that may be inferable through the representation of other semantic features cannot be accounted for within the present coding scheme. For example, intensity information may be represented more implicitly in the particular way in which a gesture explicitly represents quality information: consider two gestures representing a sensation of pressure through a pushing down motion; the one that is performed more forcefully may indicate a stronger, more intense pain than the one in which the hand is brought down without force. A coding scheme with a greater degree of granularity would be needed to capture these more implicit aspects in speech and gesture and will be the focus of future research.

It is important to address a number of other possible limitations within our study. Firstly, the present methodology required participants to describe a pain they had recently experienced (within the 2 weeks prior to participation) but were not necessarily experiencing at the time of the study. Although this retrospective description may have resulted in the pain description being less accurate, it does reflect the way in which pain communication occurs within medical settings as patients often have to discuss pain they are not experiencing at the time of the consultation. Further, studies have indicated that memory for pain is largely reliable, with the accuracy of descriptions of past pain only influenced by present pain (Erskine et al. 1990; Salovey et al. 1993); not considered a problem here as participants were asked to describe their most recent pain.

A criticism of the semantic feature approach is that it exclusively considers the depiction of semantic information in the speech and gesture modalities. What it is not designed to capture are the multiple additional dimensions of communication, such as pragmatics, interpersonal rapport, or the interplay of speech and gestures with other aspects of nonverbal communication. For example, additional information about pain, such as intensity, may be conveyed through facial expression and vocal tone as well as gestures, thus providing additional information about the pain experience that would not be captured within the present analysis. However, initiating an investigation of the role of gestures in pain-focused interactions requires us to break down the complex process of human face-to-face interaction into individual facets before arriving at a more integrated view. The semantic feature approach represents one such way to do this by allowing us to systematically investigate and quantify the role of gesture in the communication of information about pain. Previous work within clinical settings has tended to group co-speech hand gestures with other nonverbal behaviors such as eye gaze, body orientation, and paralinguistic speech properties and consider their contribution in terms of the communication of emotion, distress, or underlying traits and psychological symptoms of participants, or as modifiers of the verbal message (e.g., DiMatteo et al. 1980, 1986; Freedman 1972; Hall et al. 1995; Mahl 1968; Roter et al. 2006; Shreve et al. 1988). Thus, the present work significantly extends these findings by indicating that the particular nonverbal modality referred to here as ‘representational co-speech gestures’ should be attended to on the basis of their contribution to the communication of semantic information about the pain experience. Further research will consider the additional ways in which gestures function within interactions about physical pain and the interplay of gestures and speech with other aspects of nonverbal communication (such as eye gaze, facial expression and vocal tone).

It is also important to consider the fact that participants discussed their pain with a researcher who could not provide treatment. Despite this, the researcher had, to some extent, the same intentions as a medical practitioner in that she aimed to obtain as much information as possible about a pain experience to which she had no direct access. Further, the interview guide was based on questions used within clinical interviews about pain and the researcher behaved naturally during the interview. Given that the study was exploratory and did not test directed hypotheses, the potential for experimental confounds was minimal. Moreover, the finding that participants frequently used gestures to communicate information when they were aware that they would not receive treatment may suggest that patients who are motivated to communicate their pain as fully and accurately as possible in order to receive treatment may draw on the gestural modality to an even greater extent.

Finally, it is important to note that the participants were all females interacting with a female researcher, as research has revealed important gender differences in the way dyads interact, the most notable here being that there is greater information sharing in female-female dyads (Dindia and Allen 1992; Hall et al. 1994; Roter and Hall 2006; Roter et al. 2002). Although these limitations need to be considered when making generalizations based on the present findings, this study represents an important step forward and lays the groundwork for future research in actual clinical contexts.

The present results open up a number of opportunities for further research. As indicated earlier, a more in-depth investigation of the way in which information is represented in speech and gesture when both modalities simultaneously represent information about the same aspect of pain (e.g., quality) is needed to further understand the function of gestures here. Secondly, given that the present study has demonstrated that information is represented in gestures during pain communication, the next stage is to investigate whether this information is indeed crucial to recipients’ understanding of the pain experience. Finally, an investigation of the use of non-representational gestures during pain communication seems necessary as they accounted for around 57% of the gestures in the present corpus. This suggests they may serve an important function within pain communication. In particular, given the interpersonal functions of a subset of nonrepresentational gestures known as interactive gestures (Bavelas et al. 1992, 1995) and the important interpersonal functions of nonverbal cues within the medical consultation (Roter et al. 2006), it would be interesting to test for any associations between patients’ use of interactive gestures and the social involvement and empathy expressed by doctors.

In conclusion, the results of the present research demonstrate that during pain-focused interactions participants consistently produce gestures that contain a significant proportion of the information that participants communicate about their pain experience, a considerable amount of which was not contained in speech at all. This provides clear support for the claim that “utterances possess two sides, only one of which is speech […] to exclude the gesture side, as has been traditional, is tantamount to ignoring half of the message out of the brain” (McNeill 2000, p. 139), and suggests that gestures are a valuable source of information during pain communication and may be crucial to our understanding of others’ pain experiences. Finally, the present results give rise to a number of important avenues for further research, which it is hoped will further illuminate the role of co-speech gestures in communication and ultimately lead to more effective communication of pain.

Footnotes

1. Inspection of the results of the non-native English participants revealed that their gesture rate, speaking time, and overall gesture production were similar to and within the range of those of the native English speakers. Further, with the exception of information about cause (for which the exclusion of these participants eliminated differences between gesture only and speech only or speech and gesture together), the findings remained the same without these participants.

 

Acknowledgments

We would like to thank Judith Hall, Lorenza Mondada, Mandana Seyfeddinipur, Adam Kendon, and two anonymous reviewers for their helpful comments on earlier versions of this manuscript. We would also like to thank the participants for taking part in this study and Rebecca Cleary for her help with data analysis.

Copyright information

© Springer Science+Business Media, LLC 2011