1 Introduction

People use several information sources to perceive and interpret emotions. Visual information, such as facial expressions, is the most informative, but auditory prosodic cues in the speech signal also provide important information for emotion perception. For instance, prosodic cues may alter the meaning of a spoken message, as in the case of irony: an utterance like “I like roses” can be interpreted as positive (I do like roses) or negative (I do not like roses), depending on the applied prosody. Prosodic cues are acoustic parameters in speech, such as pitch, intensity, and tempo, from which a normal-hearing listener may perceive emotion in the speech signal (Banse and Scherer 1996; Scherer 2003; Coutinho and Dibben 2013). In an ideal communicative setting, both visual and auditory information is available. Everyday communication settings, however, frequently deprive the listener of visual information (e.g., during a telephone conversation), so that listeners have to rely on auditory information alone.

As hearing loss impairs the perception of auditory information, the perception of prosodic information may suffer as well. Although hearing aids clearly improve speech intelligibility, it is unclear to what extent they restore the information needed for emotion perception in speech. Several studies with severely hearing-impaired children and adolescents indicate that aided hearing-impaired listeners perform poorly compared to their normal-hearing peers when rating affective prosody in speech (Most et al. 1993; Most and Michaelis 2012). Moreover, these studies found that affect perception in hearing-impaired participants was independent of their individual hearing loss. These findings, however, cannot be directly transferred to older hearing-aid-wearing adults, as younger and older adults differ in the perception of affective prosody, even when both groups have normal hearing (e.g., Paulmann et al. 2008). Moreover, older adults had normal hearing when they acquired language and will have learned to interpret the acoustic cues associated with affect, in contrast to hearing-impaired children, who have never had a normal development of hearing and perception. Finally, the two age groups may differ in the type of hearing loss, which further complicates the comparison.

To our knowledge, only the effect of mild hearing loss on affect perception in older adults has been investigated so far, and findings concerning the link between individual hearing loss and affect perception have been inconsistent. Orbelo et al. (2005) found no effect of hearing sensitivity on affect perception, while Rigo and Lieberman (1989) found that low-frequency hearing loss (PTA across 0.25, 0.5, and 1 kHz > 25 dB HL) impaired affect perception. Note that both of these studies used acted speech. The lack of a global effect of hearing sensitivity on affect perception in these experiments could be due to the more prototypical prosodic expression of affect in acted compared to natural speech (Scherer 1986; Wilting et al. 2006). More extreme expressions of affect may be relatively easy to perceive, even for people with hearing loss (Grant 1987), thus obscuring a possible influence of hearing sensitivity on affect perception in natural communicative settings.

The current study investigates whether hearing aids restore affect perception, and how hearing loss in older adults influences affect perception. In particular, it focuses on the question to what extent hearing aid use and hearing loss influence listeners’ sensitivity to the acoustic parameters cueing affect. To that end, older (bilateral) hearing aid users are tested while wearing their hearing aids (aided condition) and without them (unaided condition). The relation between the acoustic parameters and the affect ratings is then evaluated for the two listening conditions. Moreover, performance in the aided condition is compared to that of a control group of age-matched normal-hearing listeners. Participants are tested on natural conversational speech stimuli in order to mimic realistic listening conditions.

2 Experimental Set-up

2.1 Participants

Two groups of older adults aged between 65 and 82 years were tested. All participants were Swiss German native speakers and were financially compensated for their participation. The group of 23 older hearing aid users with bilaterally symmetric sensorineural hearing loss (MAge = 73.5 years, SDAge = 4.5; 17 men, 6 women) was recruited via the Phonak AG participant database. All hearing aid users had worn their hearing aids bilaterally for at least two years. The group of 22 normal-hearing adults (MAge = 70.8 years, SDAge = 5.2; 10 men, 12 women) was recruited via the Phonak human resources department and a local senior club in Staefa, Switzerland.

Participants’ hearing ability was tested by means of pure-tone audiometry (air-conduction thresholds). The mean unaided pure-tone average (PTA) across 0.5, 1, 2, and 4 kHz for the hearing-impaired group was 49.8 dB HL (SD = 8.7, range: 32.5–68.8). The normal-hearing participants had age-normal thresholds (as defined in the ISO 7029:2000 standard for this age group). Pure-tone averages (across 0.5, 1, 2, and 4 kHz) below the ISO maximum at the age of 70 for men (PTA = 33.5 dB HL) and women (PTA = 26.0 dB HL) were considered to indicate normal hearing. Additionally, participants underwent a brief cognitive screening to test for mild cognitive impairment. We used the German version of the Montreal Cognitive Assessment (MoCA, Nasreddine et al. 2005) with a cutoff criterion of 67 % accuracy (cf. Waldron-Perrine and Axelrod 2012). The test was adjusted for hearing-impaired participants (Dupuis et al. 2015) by leaving out tasks in which auditorily presented items had to be memorized. All participants passed the test.
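For concreteness, the PTA and the normal-hearing criterion just described amount to a simple average and threshold check. A minimal R sketch follows; the threshold values and function names are illustrative, not taken from the study:

```r
# Four-frequency pure-tone average (PTA) from air-conduction
# thresholds in dB HL; the values below are made up for illustration.
thresholds <- c(`0.5kHz` = 40, `1kHz` = 45, `2kHz` = 55, `4kHz` = 60)
pta <- mean(thresholds)  # 50 dB HL in this example

# Normal-hearing criterion from the text: ISO 7029-based maximum
# PTA at age 70 (33.5 dB HL for men, 26.0 dB HL for women).
is_normal_hearing <- function(pta, sex) {
  pta <= if (sex == "male") 33.5 else 26.0
}
is_normal_hearing(pta, "male")  # FALSE for this example
```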

2.2 Task and Procedure

Affect perception was tested using the dimensional approach, in which participants indicate the level of the emotion dimensions arousal (calm vs. aroused) and valence (positive vs. negative attitude), separately on a rating scale (rather than labeling emotion categories such as “angry” or “sad”).

Stimuli were short audio-only utterances from an authentic, affectively colored German conversational speech corpus (Grimm et al. 2008). Emotion inferences from speech correlate across languages, particularly for similar languages (cf. Scherer et al. 2001). Given the close relationship between German and Swiss German, the way affect is encoded in Swiss German is not expected to differ considerably from that in German as spoken in Germany. The corpus comes with mean reference values for the degree of arousal and valence of each utterance. These reference values had been collected with a 5-step pictorial rating tool (Bradley and Lang 1994), ranging from −1 (calm/negative) to +1 (aroused/positive). The same rating tool was used to collect the affect ratings in the current study. From the corpus, 24 utterances were selected for the arousal task (reference value range: −0.66 to 0.94) and 18 for the valence task (reference value range: −0.80 to 0.77). All stimuli in our experiment were neutral regarding the content of what was said (e.g., ‘Was hast du getan?’, ‘What have you done?’) to minimize semantic interference, were shorter than 3 s, and were produced by multiple speakers. From these two stimulus sets, two randomized lists were created that differed in the order in which the stimuli were presented for each emotion dimension.

Participants were comfortably seated in a sound-treated room and were tested in the free field. The pictorial rating tool was displayed on a computer screen and stimuli were presented via a single loudspeaker which was placed at head level in front of the participant (0° azimuth) at a distance of 1 m. Participants received written and oral instructions and performed four practice trials before proceeding to the test stimuli of either rating task. Both rating tasks were completed at the participant’s own pace. Utterances were rated one at a time and could be replayed if needed.

All participants performed the rating tasks in two conditions. For the hearing aid users, these two conditions were with (aided) and without (unaided) their hearing aids. The normal-hearing participants completed the tasks in a normal listening condition and in a condition with simulated hearing loss (data of the latter condition are not reported here). In each listening condition, participants rated all stimulus utterances, so each participant rated each utterance twice. The order of the arousal and valence rating tasks and of the listening conditions was counterbalanced across participants. Two different lists were used to present listeners with a different order of the stimuli in the two listening conditions. There was a short break between each of the four blocks (i.e., between the two listening conditions and between the two rating tasks).

2.3 Acoustic Parameters

The affect ratings provided by the participants in our study were related to four acoustic parameters traditionally associated with affective prosody: mean F0 (e.g., Hammerschmidt and Jürgens 2007), mean intensity (e.g., Aubergé and Cathiard 2003), global temporal aspects (Mozziconacci and Hermes 2000), and spectral measures, which are related to vocal effort (e.g., Tamarit et al. 2008). In the current study, mean F0 and mean intensity were calculated for each utterance by averaging over the utterance using Praat (Boersma and Weenink 2013). As a measure of tempo, articulation rate was calculated by dividing the number of syllables in the canonical transcription of the utterance by the file length, excluding pauses longer than 100 ms. Spectral characteristics were captured by the Hammarberg Index (Hammarberg et al. 1980), defined as the difference between the maximum intensity in a lower frequency band (0–2000 Hz) and that in a higher frequency band (2000–5000 Hz). In this study, the Hammarberg Index was averaged across the entire utterance.
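To make the two hand-computed measures concrete, the sketch below shows how the articulation rate and a per-utterance Hammarberg Index could be derived in R. This is a simplified illustration, not the study’s actual pipeline: mean F0 and mean intensity were obtained in Praat, the syllable and pause information is assumed to be available from the transcriptions, and a single long-term spectrum stands in for the utterance-wide averaging described above.

```r
library(tuneR)  # readWave() for reading the utterance audio

# Hammarberg Index: dB difference between the spectral maximum in
# 0-2000 Hz and that in 2000-5000 Hz, here computed once over a
# long-term magnitude spectrum of the whole utterance (simplification).
hammarberg_index <- function(path) {
  w    <- readWave(path)
  x    <- w@left / 2^(w@bit - 1)            # normalize samples
  spec <- abs(fft(x))[1:(length(x) %/% 2)]  # magnitude spectrum
  freq <- (seq_along(spec) - 1) * w@samp.rate / length(x)
  lo   <- 20 * log10(max(spec[freq < 2000]))
  hi   <- 20 * log10(max(spec[freq >= 2000 & freq < 5000]))
  lo - hi
}

# Articulation rate: syllables in the canonical transcription divided
# by the file length minus all pauses longer than 100 ms.
articulation_rate <- function(n_syllables, file_length_s, pauses_s) {
  n_syllables / (file_length_s - sum(pauses_s[pauses_s > 0.1]))
}
articulation_rate(9, 2.4, c(0.25, 0.08))  # the 80 ms pause is kept as speech
```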

3 Results

The data were analyzed using R statistical software (R Development Core Team 2008). To investigate (a) whether hearing loss severity modulates affect ratings and (b) whether wearing a hearing aid makes listeners more sensitive to subtle differences in acoustic parameters, we compared affect ratings (the dependent variable) of the hearing-impaired listeners in the aided and unaided conditions using linear mixed-effects regression analyses with random intercepts for stimulus and participant. The initial models (one for arousal and one for valence) allowed for three-way interactions between listening condition (aided, unaided), individual hearing loss, and each of the acoustic parameters (mean F0, mean intensity, articulation rate, Hammarberg Index). Interactions and predictors that did not improve model fit (according to the Akaike Information Criterion) were removed using a stepwise exclusion procedure. Interactions were removed before simple effects, and those with the highest non-significant p-values were excluded first.
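As a concrete illustration of this model specification, a minimal R sketch using lme4 is given below. The data frame and column names (arousal_data, rating, condition, pta, f0, intensity, rate, hammarberg, participant, stimulus) are our own placeholders, and fitting with maximum likelihood (REML = FALSE) for the AIC comparison is our assumption, not a detail reported in the study.

```r
library(lme4)

# Initial arousal model: three-way interactions between listening
# condition, individual hearing loss (PTA), and each acoustic parameter,
# with random intercepts for participant and stimulus.
m_full <- lmer(
  rating ~ condition * pta * (f0 + intensity + rate + hammarberg)
           + (1 | participant) + (1 | stimulus),
  data = arousal_data, REML = FALSE
)

# One step of the stripping procedure: drop a three-way interaction,
# then keep the simpler model if it does not worsen the AIC.
m_reduced <- update(m_full, . ~ . - condition:pta:hammarberg)
AIC(m_full, m_reduced)
```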

To investigate whether the use of a hearing aid restores affect perception to the level of normal-hearing older adults, we compared the hearing aid users’ performance in the aided condition to that of the normal-hearing listeners. The method and model-stripping procedure were identical to those of the first analysis. The initial models (for arousal and valence, respectively) allowed for two-way interactions between group (hearing aid users aided, normal-hearing) and each of the four acoustic parameters.
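Under the same placeholder names as above, the initial model for this group comparison would simply swap the three-way terms for group-by-parameter interactions:

```r
# Hearing aid users (aided) vs. age-matched normal-hearing controls:
# two-way interactions between group and each acoustic parameter,
# again with crossed random intercepts for participant and stimulus.
m_group <- lmer(
  rating ~ group * (f0 + intensity + rate + hammarberg)
           + (1 | participant) + (1 | stimulus),
  data = combined_data, REML = FALSE
)
```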

3.1 Aided Versus Unaided Listening

For arousal, mean intensity was a strong cue for the arousal ratings (β = 6.606 × 10⁻², SE = 1.528 × 10⁻², p < 0.001): higher intensity was associated with higher arousal ratings in both the aided and the unaided condition. Moreover, arousal ratings were generally higher in the aided condition than in the unaided condition (mapped onto the intercept) (β = 7.156 × 10⁻², SE = 2.089 × 10⁻², p < 0.001). Significant interactions between listening condition and articulation rate (β = 3.012 × 10⁻², SE = 1.421 × 10⁻², p < 0.05) and between listening condition and vocal effort (β = 1.459 × 10⁻², SE = 3.949 × 10⁻³, p < 0.001) were observed: while vocal effort and articulation rate did not influence ratings in the unaided condition, both parameters did in the aided condition. In the unaided condition, those with poorer hearing gave lower ratings (β = −9.772 × 10⁻³, SE = 4.093 × 10⁻³, p < 0.05) than those with better hearing, but this was less the case in the aided condition (β = 7.063 × 10⁻³, SE = 2.459 × 10⁻³, p < 0.01). This suggests that wearing the hearing aid made the rating patterns of poorer- and better-hearing participants more alike. Furthermore, across listening conditions, those with poorer hearing associated increases in F0 (β = 6.093 × 10⁻⁵, SE = 2.094 × 10⁻⁵, p < 0.01) and in articulation rate (β = 1.833 × 10⁻³, SE = 8.952 × 10⁻⁴, p < 0.05) more strongly with higher arousal than those with better hearing. This suggests that, among the hearing aid users, those with poorer hearing used additional prosodic cues compared to those with relatively good hearing.

For valence, a significant simple effect of mean F0 (β = −4.813 × 10⁻³, SE = 8.856 × 10⁻⁴, p < 0.001) was found: higher pitch was associated with lower, i.e., more negative, valence ratings. None of the other acoustic parameters was predictive of the valence ratings. Importantly, no effects of listening condition or hearing loss were observed: the valence ratings were independent of whether the participants wore their hearing aids and of their individual hearing loss.

3.2 Aided Listening Versus Normal-Hearing Controls

Similar to the previous arousal analysis, a significant simple effect of mean intensity (β = 0.071, SE = 0.014, p = 0.001) was found: higher mean intensity was associated with higher arousal ratings. Although the ratings of the hearing aid users did not differ significantly from those of the normal-hearing participants (mapped onto the intercept, β = −0.030, SE = 0.053, p = 0.57), the use of mean intensity differed between the two listener groups: hearing aid users responded more strongly to differences in intensity than participants with age-normal hearing (β = 0.009, SE = 0.004, p < 0.05).

For valence, similar to the previous analysis, higher mean F0 was associated with lower valence ratings (β = −4.602 × 10⁻³, SE = 1.168 × 10⁻³, p < 0.01). No other acoustic parameters were predictive of the valence ratings. There was no effect of group, nor were there any interactions between group and the acoustic parameters.

4 Discussion

This study aimed to investigate whether the use of a hearing aid restores affect perception to the level of older adults with age-normal hearing. More specifically, our study investigated to what extent hearing aids and individual hearing loss modify sensitivity to the acoustic parameters cueing affect in older hearing aid users.

The study showed that hearing aids restored affect perception in the sense that wearing them made the rating patterns of hearing aid users with more severe hearing loss more similar to those of users with less severe hearing loss. Secondly, the study showed that the use of a hearing aid changed the pattern of acoustic parameters used for arousal perception. Importantly, across the aided and unaided conditions, hearing loss modulated the extent to which listeners used alternative cues (i.e., cues other than intensity) to interpret arousal: hearing-impaired listeners with more severe degrees of hearing loss made more use of articulation rate and mean F0. In other words, gradually acquired hearing loss causes listeners to rely on different cues for their interpretation of arousal, but restoring their hearing by means of a hearing aid also changes which cues they rely on. Older adults may only start using additional cues (such as articulation rate) for their interpretation of arousal once their hearing loss is more severe. In a related study (Schmidt et al., submitted), older adults with mild hearing loss who were not wearing hearing aids were tested. For this group with mild hearing impairment, intensity emerged as the only significant predictor of arousal. Note, however, that this reliance on multiple cues rather than on a single cue does not hold for valence, for which F0 was the only prosodic cue listeners used, irrespective of their hearing sensitivity.

Hearing aid users wearing their hearing aids generally showed the same pattern of affect ratings as participants with age-normal hearing, especially for the valence dimension. For arousal ratings, however, those wearing a hearing aid were actually more sensitive to intensity differences than participants in the reference group. This may be because hearing in the reference group was normal for their age but still involved elevated high-frequency thresholds. Consequently, older adults in the reference group were less sensitive to at least some acoustic differences than the aided hearing aid users.

In sum, the current study shows that older hearing aid users do not generally differ from their normal-hearing peers in their perception of arousal and valence, which underlines the importance of hearing aids in the rehabilitation of affect perception. While the perception of valence seems to be independent of listening condition and individual hearing loss, wearing hearing aids matters for the interpretation of prosodic information related to arousal. Given this difference between emotion dimensions, future studies on affect perception in hearing aid users should treat the perception of arousal and valence separately.