Individual Differences in Behavioural Decision Weights Related to Irregularities in Cochlear Mechanics
An unexpected finding of previous psychophysical studies is that listeners show highly replicable, individualistic patterns of decision weights on frequencies affecting their performance in spectral discrimination tasks—what has been referred to as individual listening styles. We, like many other researchers, have attributed these listening styles to peculiarities in how listeners attend to sounds, but we now believe they partially reflect irregularities in cochlear micromechanics modifying what listeners hear. The most striking evidence for cochlear irregularities is the presence of low-level spontaneous otoacoustic emissions (SOAEs) measured in the ear canal and the systematic variation in stimulus frequency otoacoustic emissions (SFOAEs), both of which result from back-propagation of waves in the cochlea. SOAEs and SFOAEs vary greatly across individual ears and have been shown to affect behavioural thresholds, behavioural frequency selectivity and judged loudness for tones. The present paper reports pilot data providing evidence that SOAEs and SFOAEs are also predictive of the relative decision weight listeners give to a pair of tones in a level discrimination task. In one condition the frequency of one tone was selected to be near that of an SOAE and the frequency of the other was selected to be in a frequency region for which there was no detectable SOAE. In a second condition the frequency of one tone was selected to correspond to an SFOAE maximum and that of the other to an SFOAE minimum. In both conditions a statistically significant correlation was found between the average relative decision weight on the two tones and the difference in OAE levels.
Keywords: Behavioural decision weights · Level discrimination · Spontaneous otoacoustic emissions · Stimulus frequency otoacoustic emissions
1 Introduction
People with normal hearing acuity can usually follow a conversation with their friends at a noisy party, a phenomenon known as the “cocktail party effect” (Cherry 1953). This remarkable ability to attend to target sounds in background noise deteriorates with age and hearing loss. Yet people who have been diagnosed in the clinic as having a very mild hearing loss, or even normal hearing based on their pure tone audiogram (the clinical gold standard for identifying hearing loss), still often report considerable difficulty communicating with others in such noisy environments (King and Stephens 1992). The conventional pure tone audiogram, the cornerstone of hearing loss diagnosis, is thus not always the best predictor of these kinds of difficulties.
Perturbation analysis has become a popular approach in psychoacoustic research for measuring how listeners hear out a target sound in background noise (cf. Berg 1990; Lutfi 1995; Richards 2002). Studies using this paradigm show listeners to have highly replicable, individualistic patterns of decision weights on frequencies affecting their ability to hear out specific targets in noise—what has been referred to as individual listening styles (Doherty and Lutfi 1996; Lutfi and Liu 2007; Jesteadt et al. 2014; Alexander and Lutfi 2004). Unfortunately, this paradigm is extremely time-consuming, rendering it impractical for clinical use. Finding a quick and objective way to measure effective listening in noisy environments would provide a dramatic improvement in clinical assessments, potentially resulting in better diagnosis and treatment.
In the clinic, otoacoustic emissions (OAEs) provide a fast, noninvasive means to assess auditory function. OAEs are faint sounds that travel from the cochlea back through the middle ear and are measured in the external auditory canal. Since their discovery in the late 1970s by David Kemp (1978), they have been used clinically to evaluate the health of outer hair cells (OHCs) and in research to gain insight into cochlear mechanics. Behaviourally, they have been shown to predict the pattern of pure-tone quiet thresholds (Long and Tubis 1988; Lee and Long 2012; Dewey and Dhar 2014), auditory frequency selectivity (Baiduc et al. 2014), and loudness perception (Mauermann et al. 2004).
The effect of threshold microstructure (as measured by OAEs) on loudness perception is particularly noteworthy because relative loudness is also known to be one of the most important factors affecting the decision weights listeners place on different information-bearing components of sounds (Berg 1990; Lutfi and Jesteadt 2006; Epstein and Silva 2009; Thorson 2012; Rasetshwane et al. 2013). This suggests that OAEs might be used to diagnose difficulty in target-in-noise listening tasks through their impact on decision weights. OAEs may be evoked by external sound stimulation (EOAEs) or may occur spontaneously (SOAEs). Stimulus frequency OAEs (SFOAEs), which are evoked using a single-frequency sound, are among the most diagnostic OAEs regarding cochlear function. When measured with high enough frequency resolution, they show a highly replicable, individualistic pattern of amplitude maxima and minima called SFOAE fine structure; the level difference between maxima and minima can be as large as 30 dB. SOAEs usually occur near the maxima of the SFOAE fine structure (Bergevin et al. 2012; Dewey and Dhar 2014). Given that loudness varies with SFOAE maxima and minima, and that loudness is a strong predictor of listener decision weights, it is possible that both SFOAEs and SOAEs may be used to predict individual differences in behavioural decision weights.
2.1 Subjects
Data are presented from seven individuals (mean age: 27.42 years) with pure tone air-conduction hearing thresholds better than 15 dB HL at all frequencies between 0.5 and 4 kHz, normal tympanograms, and no history of middle ear disease or surgery.
2.2 Measurement and Analysis of Otoacoustic Emissions
SOAEs were evaluated from 3-min recordings of sound in the ear canal obtained after subjects had been seated comfortably for 15 min in a double-walled Industrial Acoustics sound-attenuating chamber. The signal from the ER-10B+ microphone was amplified by an Etymotic preamplifier with 20 dB of gain before being digitized by a Fireface UC (16 bit, 44,100 samples/s). The signal was then segmented into 1-s analysis windows (1-Hz frequency resolution) with a step size of 250 ms. The half of the segments with the highest power was discarded to reduce the impact of subject-generated noise. An estimate of the spectrum in the ear canal was then obtained by converting the average FFT magnitude in each frequency bin to dB SPL. SOAE frequencies were identified as very narrow peaks of energy at least 3 dB above the average background level at adjacent frequencies.
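The segmentation, noisy-segment rejection, magnitude averaging, and peak picking described above can be sketched as follows. This is a minimal Python illustration, not the authors' analysis code (which was written in MATLAB); the calibration offset `calib_db` and the exact peak-picking neighbourhood are assumptions made for the sketch.

```python
import numpy as np

FS = 44100           # sampling rate, samples/s
WIN = FS             # 1-s analysis window -> 1-Hz frequency resolution
STEP = FS // 4       # 250-ms step between window onsets

def soae_spectrum(x, calib_db=0.0):
    """Average FFT-magnitude spectrum of an ear-canal recording.

    Segments the recording into overlapping 1-s windows, discards the
    noisier half (highest total power), averages the FFT magnitudes of
    the remaining segments, and converts to dB; `calib_db` stands in
    for the microphone calibration needed to reach dB SPL.
    """
    starts = range(0, len(x) - WIN + 1, STEP)
    segs = np.stack([x[s:s + WIN] for s in starts])
    power = (segs ** 2).sum(axis=1)
    keep = power <= np.median(power)            # drop the noisy half
    mags = np.abs(np.fft.rfft(segs[keep], axis=1))
    avg = mags.mean(axis=0)
    return 20 * np.log10(avg + 1e-12) + calib_db

def soae_peaks(spec_db, halfwidth=5, criterion=3.0):
    """Bins that are local maxima and lie at least `criterion` dB above
    the mean level of adjacent bins (immediate neighbours excluded to
    allow for spectral leakage). With 1-Hz bins, bin index == frequency
    in Hz."""
    peaks = []
    for k in range(halfwidth, len(spec_db) - halfwidth):
        background = np.r_[spec_db[k - halfwidth:k - 1],
                           spec_db[k + 2:k + halfwidth + 1]]
        if (spec_db[k] - background.mean() >= criterion
                and spec_db[k] == spec_db[k - 1:k + 2].max()):
            peaks.append(k)
    return peaks
```

Applied to a recording containing a narrowband emission, `soae_peaks` returns the bin (and hence frequency) of each candidate SOAE.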
2.3 Behavioural Task: Two-Tone Level Discrimination
A two-interval, forced-choice procedure was used: two-tone complexes were presented in two intervals, standard and target, on each trial. All stimuli were presented monaurally at a duration of 300 ms (SOAE experiment) or 400 ms (SFOAE experiment) with cosine-squared, 5-ms rise/fall ramps. In the target interval, the level of each tone was always 3 dB greater than in the standard interval. Small, independent, random perturbations in the level of the individual tones were applied from one presentation to the next; the perturbations were normally distributed with σ = 3 dB. The order of standard and target intervals was selected at random on each trial. Listeners were asked to choose the interval in which the target (higher-level) sound occurred by pressing a mouse button, and correct-answer feedback was given immediately after each response. Decision weights on the tones for each listener were then estimated from the coefficients of a logistic regression in which the perturbations were predictor variables for the listener’s trial-by-trial response (Berg 1990). In experiment 1, the frequencies of the two-tone complex were chosen from the SOAE measures for each listener: one at the frequency of an SOAE and the other at a non-SOAE frequency, either lower or higher than the chosen SOAE frequency. The level of each tone in the standard interval was 50 dB SPL. SOAEs usually occur near maxima of the SFOAE fine structure (Bergevin et al. 2012; Dewey and Dhar 2014), but SOAEs are not always detectable at such maxima. We therefore also measured SFOAE fine structure and selected frequencies at the maxima and minima of the fine structure for the behavioural level discrimination task. In experiment 2, two frequencies were chosen from the measured SFOAE fine structure for each listener: one at a maximum of the fine structure and the other at a minimum. The level of the standard stimuli was 35 dB SPL.
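The weight-estimation step can be illustrated with a small simulation in the spirit of the perturbation analysis of Berg (1990). This is a hypothetical Python sketch, not the authors' code: the simulated observer's weights and internal noise, and the Newton-iteration fitting loop, are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_and_fit(n_trials=4000, true_w=(0.8, 0.2), delta=3.0, sigma=3.0):
    """Simulate two-tone level-discrimination trials and recover the
    listener's relative decision weights by logistic regression.

    Each trial: the target interval adds `delta` dB to both tones, and
    each tone in each interval receives an independent Gaussian level
    perturbation (sd `sigma` dB), so the target-minus-standard level
    difference per tone has sd sigma*sqrt(2). A simulated observer
    combines the two differences with weights `true_w`, adds internal
    noise, and reports the interval that seemed louder.
    """
    d = delta + rng.normal(0.0, sigma * np.sqrt(2.0), size=(n_trials, 2))
    y = (d @ np.array(true_w) + rng.normal(0.0, 1.0, n_trials) > 0).astype(float)

    # Logistic regression fitted by Newton iterations (IRLS)
    X = np.column_stack([np.ones(n_trials), d])
    b = np.zeros(3)
    for _ in range(25):
        p = 1.0 / (1.0 + np.exp(-X @ b))
        w_irls = p * (1.0 - p)
        b += np.linalg.solve(X.T @ (w_irls[:, None] * X), X.T @ (y - p))
    w = b[1:]                  # slopes on the two tones' perturbations
    return w / w.sum()         # relative decision weights
```

With these settings the recovered relative weight on the first tone should fall near the simulated value of 0.8, illustrating how the trial-by-trial responses alone suffice to estimate the weights.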
During a testing session, SOAEs and SFOAEs were recorded prior to and after the behavioural task.
The SFOAE levels obtained at the beginning of the session are associated with the decision weights from the first half of the behavioural trials (filled symbols), and those obtained at the end of the session with the decision weights from the second half (open symbols). The correlation between the relative decision weight and the level difference between tones near fine-structure maxima and tones near minima is statistically significant (r² = 0.48, p = 0.000014). This outcome suggests that the association between decision weights and OAEs does not depend on the detection of SOAEs.
Given that loudness varies with SFOAE maxima and minima, and that loudness is a strong predictor of listener decision weights, we hypothesized that both SFOAEs and SOAEs may be used to predict individual differences in decision weights in a level discrimination task. As expected, the data showed a significant positive correlation between the difference in OAE level at the SOAE and non-SOAE frequencies and the relative decision weights in the two-tone level discrimination task (see Fig. 2). There was a similar positive correlation with the level difference between SFOAE maxima and minima (see Fig. 3). The results suggest that OAE levels might be used to predict individual differences in more complex target-in-noise listening tasks, possibly even in the diagnosis of speech understanding in specific noise backgrounds. For clinical applications, swept-frequency SFOAEs might provide a better measure of cochlear fine structure inasmuch as they are less time-consuming than SOAE measurements and provide a clearer indication of regions of threshold microstructure and variations in loudness.
Acknowledgments This research was supported by NIDCD grant R01 DC001262-21. The authors thank Simon Henin and Joshua Hajicek for providing MATLAB code for the LSF analysis.
References
- Dewey JB, Dhar S (2014) Comparing behavioral and otoacoustic emission fine structures. In: 7th Forum Acusticum, Krakow, Poland
- Kleiner M, Brainard D, Pelli D (2007) What’s new in Psychtoolbox-3. Perception 36:14
- Long GR, Talmadge CL, Jeung C (2008) New procedure for evaluating SFOAEs without suppression or vector subtraction. Assoc Res Otolaryngol 31
- Naghibolhosseini M, Hajicek J, Henin S, Long GR (2014) Discrete and swept-frequency SFOAEs with and without suppressor tones. Assoc Res Otolaryngol 37(73)
Open Access This chapter is distributed under the terms of the Creative Commons Attribution-Noncommercial 2.5 License (http://creativecommons.org/licenses/by-nc/2.5/), which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited. The images or other third party material in this chapter are included in the work’s Creative Commons license, unless indicated otherwise in the credit line; if such material is not included in the work’s Creative Commons license and the respective action is not permitted by statutory regulation, users will need to obtain permission from the license holder to duplicate, adapt or reproduce the material.