The Source-Path-Receiver Model (SPRM) is a fundamental concept derived from hazard (including noise) control. It is useful in studies of animal bioacoustics, where the sound sources may be animals, humans, or natural events within the habitat, and the receivers are animals. It provides a framework for the researcher to ensure that all aspects of the scenario being observed or recorded that could affect the observations are considered. This chapter develops the SPRM for the example of animal acoustic communication, where the source and receiver are animals of the same species. Factors that affect the source and receiver are explained (e.g., age, sex, individual identity, and context). Much emphasis is given to “the path.” The environment through which the sound travels affects the received signal (in terms of its amplitude, frequency, and duration) and contributes ambient noise, which might affect both sender and receiver. The basic concepts of sound propagation are explained (including Huygens’ principle, ray tracing, Snell’s law, reflection, scattering, reverberation, diffraction, refraction, transmission, absorption, the ground effect, atmospheric effects, acoustic mirages, and shadow zones). The SPRM illustrates the importance of exploring the acoustic features of a sound signal at all points between the sender and receiver to understand factors that could promote or inhibit effective communication among animals.
Jeanette A. Thomas (deceased) contributed to this chapter while at the Department of Biological Sciences, Western Illinois University-Quad Cities, Moline, IL, USA
The source-path-receiver model (SPRM) provides a common framework for occupational health and safety management. It is used for hazard control to minimize the risk of exposing workers to hazards. Such hazards may be chemicals (e.g., spilled compounds in a pharmaceutical laboratory), material (e.g., falling bricks on a construction site), or noise.
An example SPRM for chemical hazards is shown in Fig. 5.1a. The source is a poisonous chemical, which leaks into the air inside a laboratory, and the receiver is a pharmaceutical worker. The SPRM guides the health and safety manager in minimizing the risk of exposure. Ideally, the source would be eliminated, but this might not be possible if this type of chemical is required. Perhaps it can be substituted with a less volatile or toxic chemical? There may be engineering controls, such as installing an isolation chamber (or glove box) or exhaust hood. Engineering controls may also be applied to the path along which the chemical travels: installing ventilators, absorbing material, or mechanical barriers, or simply extending the length of the path to increase dilution. Finally, controls may be applied at the receiver: proper training for safe handling of the chemical, limiting work hours, rotating shifts, and wearing personal protective equipment (PPE). In terms of reducing the risk of exposure, the measures rank from most to least effective (termed the hierarchy of control): elimination, substitution, engineering controls, procedural controls, and finally, PPE.
The SPRM applied to noise control helps break down the components of noise exposure that can be modified to reduce the risk of acoustic impacts. In the example of Fig. 5.1b, the source is a busy downtown road. Noise from the cars travels to surrounding residential buildings. The source may be eliminated by relocating all traffic to an inner-city bypass and banning all traffic downtown. Maybe private car traffic can be substituted by a quieter, electric city bus service. Imposing a speed limit reduces noise. Some cities enforce noise emission standards for cars. Long-term engineering solutions may include building a tunnel, resurfacing the road with noise-absorbing material, installing noise barrier walls along the road, or erecting earth bunds. Residential buildings may have noise-reduction (double-glazed) windows, and residents may set up their bedrooms on the opposite side of the building. The specific implementation of the SPRM depends on the application. For example, residents in an apartment building would not want to wear earmuffs at home, but for workers in a noisy plant, such PPE is common practice. A poster showing the steps involved in workplace noise control is shown in Fig. 5.2.
Even though the SPRM was originally developed to manage hazards at the workplace, it is much more broadly applicable to the day-to-day lives of humans—and animals. In fact, the SPRM is fundamental. Without a receiver, there is no hazard. Without a listener, there is no noise. Researchers of animal bioacoustics might want to apply the SPRM to their project in order to identify parameters of the source, path, and receiver that might influence the results. Other chapters in this book either explicitly or implicitly apply the SPRM. Chapter 13 on the effects of noise on animals provides examples where the source is a highway, the path follows from the highway into the surrounding bush, and the receivers are birds, whose abundance might decrease closer to the source as a result of habitat degradation by noise. Chapter 11 deals with acoustic communication between animals, and so the source may be a male frog, the path may lead through a tropical rain forest, and the receivers are nearby females of the same species. Chapter 12 is about echolocation. Here, the source and the receiver are the same individual animal. A bat echolocates on a moth and the echolocation signal reflects off the moth, informing the bat how far away its prey is. The signal travels through the environment twice: from the bat to its prey and back. Chapter 10 covers audiometry, where the sources are controlled and engineered signals (often pure tones) that are played to animals over short distances or through earphones, and the receivers are individual animals whose hearing is being measured. Chapter 7 explores soundscapes on land and under water. The sources are grouped into geophony (e.g., wind, rain, and waves), biophony (i.e., animals), and anthropophony (e.g., airplanes or ships). The paths go through the air over land, under water, and through the ground.
The receivers in passive acoustic monitoring of soundscapes are recorders, which collect and store acoustic data for later analysis in the laboratory. The following sections first explore the basic concepts of sound propagation in air before applying these to an example SPRM.
5.2 Sound Propagation in Terrestrial Environments
The environment through which a sound travels alters its acoustic features such as its spectral composition and level. The effects of the environment on bioacoustic signals were well explored in the classic works of Chappuis (1971), Marten and Marler (1977), Michelsen (1978), and Wiley and Richards (1978).
Airborne sound propagation (often called outdoor sound propagation) is characterized by a number of phenomena. Sounds attenuate with distance from the sender due to geometrical attenuation (i.e., spreading) and absorption by the medium. High-frequency sounds (i.e., sounds having short wavelengths; see Chap. 4 on definitions of frequency and wavelength) propagate over shorter distances than low-frequency sounds (i.e., sounds having long wavelengths). Environmental and structural factors such as substrate composition; terrain profile; obstacles along the path; amount of vegetative cover; wind speed and direction; vertical gradients (i.e., increases or decreases) in wind speed, air temperature, and humidity; air turbulence; and, to a small degree, altitude (i.e., atmospheric pressure) affect sound propagation in air (Fig. 5.3). The propagation paths, along which sounds travel, are rarely straight lines, but rather bend (i.e., refract or diffract), reflect, and scatter. The same sound traveling along different propagation paths may interfere with itself constructively or destructively. The received sound is a weaker and often distorted version of the sent sound (Wahlberg and Larsen 2017).
This section explains the basic concepts of sound propagation in air and provides some insights into environmental effects on propagation. Some environmental factors (e.g., air temperature, wind speed and direction, and humidity) vary throughout the day and among seasons, and so sound propagation can be quite variable. Sound propagation models exist and can be used to predict the distance over which sounds travel, create noise maps, estimate changes to the acoustic (e.g., spectral) features of received sounds, and identify factors that could hinder or enhance animal communication (see Lohr et al. 2003; Jensen et al. 2008). Bioacousticians should consider the characteristics of sound propagation, which could explain variability in the receiver’s behavioral response or the effectiveness of acoustic communication.
5.2.1 Ray Traces
Sound propagation is accurately described by the acoustic wave equation, a four-dimensional (4-d: three spatial coordinates and time), second-order differential equation. For an “easy” derivation of the acoustic wave equation, see Larsen and Radford (2018). However, in the simplest situation of symmetric geometry (i.e., an omnidirectional signal in a homogeneous medium with no reverberation), the equation can be simplified and described by one variable: the range to the source (Wahlberg and Larsen 2017). Even then, solving the wave equation under the various and variable conditions encountered in common sound propagation scenarios is quite a task. Fortunately, there are much simpler, conceptual principles of sound propagation that can yield satisfactory results. One such concept is ray propagation, or ray tracing.
Let us consider an omnidirectional source, which emits sound equally in all directions. An example is the crowing rooster in Fig. 5.4a (although it is only omnidirectional at the lower frequencies of its crow and it might not typically crow while roosting, but for the sake of science…; Larsen and Dabelsteen 1990). Wave rays point in the direction of sound propagation and are perpendicular to the wavefronts of the propagating sound. The wavefronts are spheres in 3D space (circles in 2D). Huygens’ principle (named after Christiaan Huygens, a Dutch physicist) states that every point on a wavefront can be considered a source of a new (secondary) wave. And all of the secondary wavefronts superpose to build the next (in time) primary wavefront. The wavefront at time t3 in Fig. 5.4a is also shown in Fig. 5.4b. Nine example points on this wavefront are “randomly” illustrated (as small suns). These each create their own set of concentric wavefronts, drawn at time t4. The secondary waves cancel out in some places but at the farthest range from the rooster in the center, the secondary wavefronts line up to yield the new primary wavefront at time t4.
As the expanding wavefront encounters features of the environment (e.g., vegetation or gradients in sound speed), its shape changes and the directions of the wave rays change. The laws of physics and principles of sound propagation can be applied to trace the propagation paths. This is called ray tracing. For an easy introduction to ray tracing, see Heller (2013). Wahlberg and Larsen (2017) suggested visualizing a ray as a “small acoustic particle travelling along a narrow beam or ray in discrete steps and bouncing-off or being refracted through surfaces.” This type of sound field visualization, first introduced in 1967 (Krokstad et al. 2015), has been used extensively in linear acoustics to model phenomena in outdoor sound propagation with the computational tools now available (Attenborough et al. 1995).
An example of ray tracing is shown in Fig. 5.5. The omnidirectional source is located in the lower left corner, 5 m above ground at range 0, and it emits a 10-Hz tone. The wave rays are shown and follow the sound propagation paths. Sound that is initially emitted in an upwards direction bends downward at a certain altitude (depending on its initial angle of emission). This is typical for nighttime sound propagation. Once rays hit the ground, they are reflected upwards again. The sound field (i.e., the received level at every location in space) is computed by summing sound pressure over all rays. Regions where rays travel close together have high received levels (little propagation loss) and regions that only a few rays enter have low received levels (high propagation loss).
For example, Ottemöller and Evers (2008) used ray tracing to describe the sound propagation of a massive vapor cloud explosion at the Buncefield fuel depot near Hemel Hempstead, UK, on the morning of 11 December 2005. A storage tank overflowed and released over 300 tons of fuel. An explosion was triggered after a vapor cloud formed and spread over a very large area (80,000 m2, or about 20 acres) before igniting. The explosion was huge, caused extensive damage, injured 43 people, and was detected by seismograph stations in the UK and the Netherlands. These data provided significant information on the ray trajectories of the sound from this explosion.
5.2.2 Geometrical Sound Spreading
Sound from an omnidirectional source in the free-field spreads out evenly in a spherical pattern (i.e., equally in all directions). The free-field is homogeneous (i.e., has no temperature or humidity gradients) and unimpeded by buildings or vegetation. At any receiver location in space, only a small proportion of the emitted sound arrives, and so the received sound is attenuated compared to the sound energy emitted at the source. The total attenuation or loss of sound energy from the source to a receiver is known as propagation loss (PL; formerly transmission loss). The sound pressure level at the source (defined as 1 m from a point source; see Chap. 4) is called the source level (SL), whereas the sound pressure level at the receiver at a distance (i.e., range r) from the source is called the received level (RL). The relation between these two levels is given by Eq. 5.1:

RL = SL - PL  (5.1)
Propagation loss in the free-field is termed spherical spreading loss, which can be computed as PLsph = 20 log10(r) (for derivation of this expression, see Wahlberg and Larsen 2017). It is independent of signal frequency and only depends on the geometry of the source and sound field. So, Eq. 5.1 may be reformulated:

RL = SL - 20 log10(r)  (5.2)
As a first approximation, spherical spreading is a good model for the propagation of terrestrial animal sounds produced in large open-air regions, such as grassland. Generally, if a bird sings on the ground up to about 10 m from a microphone, only spherical spreading needs to be considered. If the receiver is at a greater distance from the bird, then ground and atmospheric effects also must be considered. If the bird is flying overhead, then spherical spreading and atmospheric effects need to be considered when determining propagation characteristics.
If other sources of attenuation are negligible, then Eq. 5.2 can be used to calculate the source levels of a vocalizing animal located at distance r from the receiver. For instance, if a bioacoustician measured RL = 65 dB re 20 μPa at a distance of 10 m from a singing bird, then SL (at 1 m from the bird) becomes 65 dB re 20 μPa + 20 log10(10) dB re 1 m = 85 dB re 20 μPa m (e.g., Dabelsteen 1981). Similarly, if somebody played back a sound at a known source level of 85 dB re 20 μPa m, then the predicted RL at 1 km (= 103 m) range would be 25 dB re 20 μPa, as 20 log10(103) = 60.
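The worked example above is easy to reproduce in a few lines of Python. This is a minimal sketch of Eq. 5.2 under the stated assumption that spreading is the only source of attenuation; the function names are ours, chosen for illustration:

```python
import math

def spherical_spreading_loss_db(r_m):
    """Spherical spreading loss, PLsph = 20 log10(r), in dB re 1 m."""
    return 20.0 * math.log10(r_m)

def source_level_db(rl_db, r_m):
    """Back-calculate SL (dB re 20 uPa m) from an RL measured at range r."""
    return rl_db + spherical_spreading_loss_db(r_m)

def received_level_db(sl_db, r_m):
    """Predict RL (dB re 20 uPa) at range r from a known SL."""
    return sl_db - spherical_spreading_loss_db(r_m)

# The singing-bird example from the text:
print(source_level_db(65.0, 10.0))      # SL = 85.0 dB re 20 uPa m
print(received_level_db(85.0, 1000.0))  # RL = 25.0 dB re 20 uPa at 1 km
```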
In some environments, and for some sources (i.e., line sources rather than point sources), airborne sound propagation can be better described as cylindrical spreading. For an infinitely long line source, the propagation loss as a function of range becomes PLcyl = 10 log10(r) and so Eq. 5.1 becomes:

RL = SL - 10 log10(r)  (5.3)
Most biological line sources, however, are finite, such as a row of vocalizing birds on a power line. (Please be aware that this example is not a line source in the strict acoustic sense.) This means that geometrical spreading loss is somewhere between that of spherical and cylindrical spreading loss (Fig. 5.6). When the receiver distance from the finite line source is much less than the length of the finite line source, then the attenuation is close to that of an infinite line source (i.e., 10 log10(r)), whereas at distances comparable to or larger than the length of the finite line source, the latter acts more like a point source and attenuation develops as 20 log10(r). At sufficiently long distances, all sources can be regarded as point sources.
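The two geometrical spreading regimes differ only in the multiplier of log10(r), which a small sketch makes concrete (a hypothetical helper assuming pure spherical or pure cylindrical spreading; a finite line source falls between the two):

```python
import math

def spreading_loss_db(r_m, geometry="spherical"):
    """Geometrical spreading loss in dB re 1 m: 20 log10(r) for a point
    source (spherical), 10 log10(r) for an infinite line source
    (cylindrical)."""
    factor = 20.0 if geometry == "spherical" else 10.0
    return factor * math.log10(r_m)

# At 100 m, a point source has lost twice as many decibels as a line source:
print(spreading_loss_db(100.0, "spherical"))    # 40.0 dB
print(spreading_loss_db(100.0, "cylindrical"))  # 20.0 dB
```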
The propagation loss, however, includes much more than geometrical spreading loss: beyond some distance from the source, RL is usually smaller than predicted by Eqs. 5.2 or 5.3. To account for this extra attenuation, Marten and Marler (1977) introduced the term excess attenuation (EA). It includes a number of other effects, such as atmospheric absorption, reflection and scattering, the ground effect, attenuation by vegetative cover, refraction by air temperature and wind gradients, and attenuation due to turbulence; often, there still remains a residual attenuation not accounted for by these mechanisms (Wahlberg and Larsen 2017). While geometrical spreading is frequency-independent, most of the effects contributing to EA are frequency-dependent and thus alter the spectrum of the emitted sound.
In most bioacoustic scenarios, spherical attenuation applies, and Eq. 5.2 can be reformulated to:

RL = SL - 20 log10(r) - EA
The following sections investigate each of these components of EA.
5.2.3 Sound Absorption in Air
An important and predictable component of EA is attenuation by absorption in air. Absorption refers to the conversion of acoustic energy into heat, mostly due to molecular relaxation of air molecules and the air’s shear viscosity. Absorption loss EAabs is directly proportional to the distance r from the source:

EAabs = α r
The absorption coefficient α (measured in dB/m) is a complex function of sound frequency, air temperature, relative humidity, and (to a lesser degree) atmospheric pressure (or altitude), in addition to characteristics of oxygen and nitrogen molecules (Attenborough 2007).
For instance, a 2-kHz signal propagating at standard atmospheric pressure (1 atm) and 20 °C is attenuated by about 0.9 dB/100 m, if the relative humidity (r.h.) is 60%, but by about 4.5 dB/100 m at 10% r.h. (Fig. 5.7). Generally, sound attenuation is greater in drier air than in damp, humid air. The effect is especially important at frequencies above 2 kHz. In other words, air acts as a low-pass filter enabling only low-frequency sound to travel over long distances from the source (Attenborough 2007; Wahlberg and Larsen 2017; Larsen and Radford 2018). Consequently, bats use high source levels to overcome the attenuation in air at high frequencies when they echolocate on targets at long distances. This low-pass filter effect is especially visible in the field for broadband sound signals produced by orthopterans and other insects (Römer 1998).
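Because absorption loss grows linearly with range, the effect of humidity is easy to tabulate. A minimal sketch, using only the two example coefficients quoted above (0.9 and 4.5 dB per 100 m for a 2-kHz tone at 20 °C; the names are ours):

```python
def absorption_loss_db(alpha_db_per_m, r_m):
    """Air-absorption loss EAabs = alpha * r (alpha in dB/m, r in m)."""
    return alpha_db_per_m * r_m

ALPHA_2KHZ_60RH = 0.9 / 100.0  # dB/m at 60% relative humidity (from the text)
ALPHA_2KHZ_10RH = 4.5 / 100.0  # dB/m at 10% relative humidity (from the text)

# Over 500 m, the dry-air case loses five times as much energy to absorption:
print(absorption_loss_db(ALPHA_2KHZ_60RH, 500.0))  # 4.5 dB
print(absorption_loss_db(ALPHA_2KHZ_10RH, 500.0))  # 22.5 dB
```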
Sound absorption in air varies with time of day and season, mainly due to variations in the relative humidity, which usually peaks in the afternoon (see Larsson 2000; Attenborough 2007). So, if precise values of air absorption are needed in a field experiment, the relative humidity, atmospheric pressure, and air temperature must be measured over time and used in subsequent calculations (Wahlberg and Larsen 2017).
However, at the short distances (<100 m) over which most acoustic communication between animals takes place, and at frequencies below 10 kHz, the role of absorption in overall propagation loss is likely insignificant compared to other environmental factors. Garcia et al. (2012), for example, described the 40-Hz wing-beat signals of drumming ruffed grouse (Bonasa umbellus). Theoretically, these sound signals would be reduced by 6 dB due to air absorption only at a distance of 187 km from the drumming bird, whereas spherical spreading loss alone (PLsph = 60 dB re 1 m) would have reduced the signal amplitudes to a level far below the auditory threshold of most animals already at a distance of 1 km.
5.2.4 Reflection, Scattering, and Diffraction
A second and less predictable component of EA is the attenuation caused by reflection, scattering, and diffraction. As a sound wave hits a hard surface, it is reflected. Reflection can be explained with Huygens’ principle. In Fig. 5.8a, the rooster from Fig. 5.4a is very far away such that the wavefronts at any location appear planar (rather than circular) and the wave rays are parallel (rather than radial). Three incident rays are drawn, hitting the surface (e.g., a road) at times t1, t2, and t3. By Huygens’ principle, each point on the road that is hit acts as the source of a secondary wave. Two secondary wavefronts are shown at time t3. From the time t1, when the first ray hits, to the time t3, the first wavefront has expanded quite a bit. The second wavefront was started at time t2, when the second ray hit, and has expanded less by time t3. The third ray is just starting its secondary wave at time t3, with its secondary wavefront not yet visible. The tangent to the secondary wavefronts at time t3 gives the new wavefront of the reflected wave. The angle of incidence (measured from the normal) is equal to the angle of reflection (also measured from the normal). This is referred to as the law of reflection. It applies to the so-called specular reflection (as from a mirror).
Reflection is not always specular but might instead be diffuse. In diffuse reflection, sound is scattered from the surface in all sorts of directions including the specular direction (Fig. 5.8b). This happens when the surface is not smooth but rough. Scattering depends on the ratio of the wavelength of sound to the size of the scatterer. When the sound wavelength is long (i.e., frequency is low) relative to the roughness of the surface, all the sound energy is reflected in the specular direction. When the wavelength is short (i.e., frequency is high) and less than the magnitude of the unevenness of the surface, then sound is scattered in other, non-specular directions. A gravel road, for instance, produces specular reflection at frequencies below 15–20 kHz, but at higher frequencies, where the gravel roughness is large relative to the wavelength, sound is scattered in different directions (Michelsen and Larsen 1983).
Reverberation is a result of multiple reflections and refers to the phenomenon of sound persisting even if the source is turned off. In canyons, caves, or other enclosures, sound bounces off the boundaries again and again. The reverberant sound field is the space that is dominated by reflected sound (as opposed to the field near the source where the direct sound dominates). Once the source is switched off, the reverberant field will continue to exist for some time, yet decay due to absorption by the medium, boundaries (e.g., the walls of a music room), and absorbers in the room (e.g., furniture and people). The more reflective the boundaries, the greater the reverberation.
Reverberation severely alters the structure of the received sound and is one of the least wanted effects in analysis of recorded animal sounds (Fig. 5.9). This type of signal degradation with propagation distance can be quantified by measuring the blur-ratio (see e.g., Dabelsteen et al. 1993). The received sound appears longer in duration than the emitted sound, with the delayed echoes forming a resulting “tail.” This reverberation tail can be quantified as the tail-to-signal ratio (Holland et al. 2001). Consequently, leading edges of sound segments are relatively well-preserved, whereas ending edges are lost in reverberant environments.
Diffraction occurs when a sound wave is partially obstructed. In Fig. 5.10a, a plane wave (perhaps again from a far-away rooster) hits a wall with an opening in the center. The rays that hit the wall are reflected (not drawn). The rays that hit the opening pass straight through. By Huygens’ principle, each point of the opening acts as a source of secondary waves. As the secondary wavefronts expand, they superpose to form new wavefronts that appear to bend behind the wall. This is termed diffraction. It also occurs when the obstruction is finite (Fig. 5.10b).
If the object that is in the path of a propagating sound wave becomes much smaller than a wall (e.g., a bush or maybe just an insect in the air), to the point where the wavelength is much greater (at least by a factor 10) than the size of the object, then the sound wave “ignores” the object and propagates without obstruction. The sound effectively cannot “see” the object; it is too small. In laboratory experiments, bioacousticians should therefore make sure that objects in the sound path from loudspeaker to experimental animal are at least 10 times smaller than the wavelength of the stimulus sound (Larsen 1995). When the wavelength is of the same order of magnitude as the object, or somewhat greater, then diffractive scattering occurs (Bradbury and Vehrencamp 2011). As the name suggests, this is a combination of diffraction and scattering, whereby some sound bends around the object and some sound scatters in all directions, leading to a complicated sound field.
Different surfaces or materials exhibit different degrees of sound reflection, absorption, and transmission. A hard, compact, smooth surface (such as a paved road, ice sheet, cave wall, canyon, subterranean tunnel, burrow wall, or wall of a captive animal’s exhibit) reflects more and absorbs less acoustic energy than a porous, soft surface (such as tree leaves, grassy pastures, or forest canopy). Whether a surface or object is considered rough or smooth and hard or soft depends on the wavelength of the sound. In a mixed deciduous forest, reverberations for frequencies above 4 kHz are stronger with leaves on the trees than without leaves (Wiley and Richards 1982). Reverberations essentially are absent in an open field on a calm day.
5.2.5 Ground Effect
Another component of EA is the so-called ground effect, which is always present in terrestrial sound propagation. The sound signal from a sender (S) located at some height above ground (e.g., a bird at 4 m) will reach a receiver (R; e.g., a recordist’s microphone at 1.5 m) first by the direct path (PD) and a moment later by the indirect and longer path when the signal has been reflected from the ground (PG) (Fig. 5.11a). This results in a range-dependent interference pattern between the sound propagating along PD and PG. The interference pattern has regions of enhanced received level (due to constructive interference) and of attenuated received level (due to destructive interference) at the position of R (Fig. 5.11b). The received sound signal is a distorted version of the emitted signal. It is said to be comb-filtered, as the destructive interference creates the “comb teeth” attenuating some frequencies in the signal, whereas the constructive interference enhances other frequencies of the signal. The magnitude of the ground effect depends on sound frequency, on geometry of the sender-receiver separation distance and height above ground, on the roughness and softness of the ground, and on atmospheric pressure, ambient temperature, relative humidity, and turbulence (see Attenborough et al. 2007). Acoustically hard ground surfaces (such as rock or consolidated sand) produce comb-filter effects over a wide frequency range extending to relatively high frequencies, whereas acoustically soft surfaces (such as grasslands, forest floors, or unpacked snow) mainly generate the ground effect at low frequencies. Recordists may reduce the ground effect by placing microphones as high as practically possible above soft ground. For a general introduction to the phenomenon, see Michelsen and Larsen (1983) or Wahlberg and Larsen (2017). For a comparison between ground effect models and outdoor recordings, see Jensen et al. (2008).
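Under the simplifying assumption of a perfectly hard, mirror-like ground, the comb-filter null frequencies follow directly from the two-path geometry described above: the reflected path PG can be modeled via an image source below the ground, and nulls fall where the path difference is an odd number of half-wavelengths. The sketch below is ours (function name and numerical scenario are illustrative; real, finite-impedance ground shifts these frequencies):

```python
import math

def comb_filter_nulls(h_source, h_receiver, horiz_range, c=344.0, n_nulls=3):
    """First few destructive-interference (null) frequencies, in Hz, for the
    two-path ground-effect geometry over perfectly hard ground."""
    direct = math.hypot(horiz_range, h_source - h_receiver)     # PD
    reflected = math.hypot(horiz_range, h_source + h_receiver)  # PG (image source)
    delta = reflected - direct                                  # path difference
    return [(2 * n + 1) * c / (2 * delta) for n in range(n_nulls)]

# A bird 4 m above ground, a microphone at 1.5 m, 50 m apart horizontally:
print(comb_filter_nulls(4.0, 1.5, 50.0))
```

Raising the microphone (larger h_receiver) increases the path difference and pushes the first null to a lower frequency, which is one way to think about the recording advice above.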
5.2.6 Attenuation by Vegetative Cover
Absorption of sound by vegetation is a component of EA that can further dissipate airborne sounds over distance, as acoustic energy is converted to heat in the plant material by viscous friction. The absorption of sound in vegetation depends on the material composition and hardness of the surfaces, including the soft ground often found in woodland. Leaves absorb more sound energy than a tree trunk, whereas a tree trunk reflects more sound than leaves do. All of these effects are frequency-dependent.
This component of EA obeys no simple rules and needs to be measured by propagation experiments in the field (e.g., Dabelsteen et al. 1993). Aylor (1972a, b) measured sound propagation loss through various crops, bushes, and trees by broadcasting from a loudspeaker and recording at some distance with a microphone. He found foliage enhanced absorption and scattering. Price et al. (1988) modeled and measured attenuation by vegetation in different forest environments and documented scattering from tree trunks, enhanced ground effect in the presence of mature forest litter, and attenuation by foliage. Foliage attenuation had the greatest effect above 1 kHz and increased almost linearly with the logarithm of frequency. Through mixed coniferous forest, for instance, the attenuation over 24 m varied from about 5 dB at 2 kHz to 10 dB at 4 kHz, which is the range of dominant frequencies in many songbird songs. This foliage attenuation is less than, but needs to be added to, the 28-dB attenuation caused by spherical spreading over the same distance (Eq. 5.2).
Some research on sound propagation through vegetation was motivated by a desire to attenuate anthropogenic noise, such as road noise, but, perhaps surprisingly, dense foliage generally accounts for only a small amount of attenuation. Martínez-Sala et al. (2006) concluded that a 15-m wide patch of regularly spaced trees could attenuate car noise by at least 6 dB, an effect similar to that of more traditional noise barriers. Defrance et al. (2002), for instance, found that a 100-m wide forest strip was effective at providing an acoustical barrier to noise, such as shown in Fig. 5.12, where octave-band sound was broadcast through dense foliage and recorded at different distances in the forest.
At present, vegetation attenuation is not well understood. A much larger database is needed before it is possible to accurately predict the effect of different kinds of vegetation on sound propagation (see Attenborough et al. 2007).
5.2.7 Speed of Sound in Still Air
The speed of sound in still air is affected only by the ambient air temperature and, to a minimal extent, air pressure (or altitude). If the sound propagates under windy conditions, however, the effective speed of sound will be modified by the wind velocity such that the wind velocity of a tailwind will add to the speed of sound and the wind velocity of a headwind will subtract from the speed of sound.
The speed of sound determines the arrival time of a signal from the sender to the receiver and bends a propagating sound wave away from higher air temperature and towards lower air temperature (or from higher wind velocity towards lower wind velocity). The speed of sound in air at 21 °C is 344 m/s. At freezing point, 0 °C, the speed of sound in air is 331 m/s. A good approximation of the speed of sound c in dry air with 0.04% CO2 and temperature Tc (in °C) is:

c = 331.3 + 0.606 Tc  (in m/s)
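This linear approximation (assuming the commonly used form c ≈ 331.3 + 0.606 Tc, which reproduces the two values quoted above) can be checked in one line of Python:

```python
def speed_of_sound(temp_c):
    """Approximate speed of sound in dry air (m/s) at temperature temp_c (degC)."""
    return 331.3 + 0.606 * temp_c

print(round(speed_of_sound(21.0)))  # 344 m/s
print(round(speed_of_sound(0.0)))   # 331 m/s
```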
5.2.8 Refraction by Air Temperature Gradients in Still Air
Refraction is the change of the direction of sound propagation due to changes in the speed of sound. In the example of Fig. 5.13a, a plane wave in medium 1 hits an interface with medium 2. Some of the acoustic energy might be reflected (as in Fig. 5.8a, not drawn in Fig. 5.13a), and some of the energy is transmitted. The transmitted wave is refracted, because the speeds of sound differ in the two media. If c1 > c2, then the transmitted wave bends towards the normal (i.e., away from the interface; Fig. 5.13a); if c1 < c2, then the transmitted wave bends away from the normal (i.e., towards the interface; Fig. 5.13b). The angles of incidence and refraction (transmission) are related via Snell’s law (named after Dutch astronomer and mathematician Willebrord Snell):

sin θ1 / sin θ2 = c1 / c2
Note that, while the frequency of the sound does not change during transmission, the wavelength does change. With c = λf (see Chap. 4, section on the speed of sound), the wavelength is smaller in the medium with lower sound speed.
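As a minimal numeric sketch of Snell’s law (sin θ1/c1 = sin θ2/c2) and of the accompanying wavelength change, with illustrative sound speeds and an illustrative function name:

```python
import math

def refraction_angle(theta1_deg, c1, c2):
    """Angle (deg) of the refracted ray via Snell's law:
    sin(theta1)/c1 = sin(theta2)/c2, angles measured from the normal.
    Returns None when there is no transmitted ray (total reflection)."""
    s = math.sin(math.radians(theta1_deg)) * c2 / c1
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))

# From a fast medium (c1 = 343 m/s) into a slower one (c2 = 300 m/s),
# the transmitted ray bends towards the normal:
print(refraction_angle(30.0, 343.0, 300.0))  # about 25.9 (< 30) degrees

# Frequency is unchanged on transmission, but wavelength = c / f changes:
f = 1000.0  # Hz
print(343.0 / f, 300.0 / f)  # wavelength is shorter in the slower medium
```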
Refraction of sound waves in air is a common phenomenon due to vertical gradients of air temperature and/or wind velocity. A gradual change in sound speed is illustrated in Fig. 5.13b, where the rays bend more and more upwards as the sound speed increases. In terrestrial environments, the sound source is typically located close to the ground. A sound speed profile in which the speed of sound increases with altitude is downward refracting, while one in which the speed of sound decreases with altitude is upward refracting. Bent propagation paths have the effect that sound appears to arrive from a non-intuitive (i.e., not straight-line) direction. This phenomenon is called an acoustic mirage, in analogy to optical mirages, which are also caused by refraction (of light) and produce displaced images of far-away objects.
The EA from refraction may be positive or negative, and so RL may be smaller or greater than predicted for a non-refracting atmosphere. Air temperature varies throughout the day and creates varying temperature gradients, so recording at the same location at a different time of day can produce different results. Taking periodic measurements of the ambient temperature at different heights above the ground can therefore give the researcher an indication of whether sound propagation conditions are changing and how quickly.
In still air during daytime, sunlight heats the ground, which warms up much faster than the overlying air, so the air close to the ground is warmer (and often more humid) than the air above. At higher elevations, the air temperature decreases by about 0.01 °C/m (Fig. 5.14a). Sound waves consequently bend away from the warmer air near the ground and upwards towards the cooler air above (Fig. 5.14b). Horizontal rays will be directed upwards, as will downward-directed rays after bouncing off the ground. Therefore, a limiting ray exists that defines a shadow zone around the sound source, where the sound level decreases much faster than predicted from distance alone (Fig. 5.14b). While the shadow zone cannot be reached by a direct path, it may be ensonified by reflection off houses (or other reflectors) in the vicinity and by paths passing through turbulence; the shadow zone is thus not completely quiet.
For example, on a sunny day with little wind, the air temperature can be 30 °C at the ground (c = 351 m/s), but at 2–3 m above ground, the temperature may be only 25 °C (c = 347 m/s). This decrease continues up through the atmosphere by 1 °C/100 m, the so-called temperature lapse. With such an air temperature gradient, the sound rays from a sound source located a few meters above ground will bend upwards, because part of the wave closest to the warmer ground will travel the fastest. In a carefully conducted experiment, a combination of upward refraction, strong upwind propagation, and air absorption was measured to reduce the level of propagating sound at a distance of 640 m by up to 20 dB more than predicted from Eq. 5.2 (Attenborough 2007). Perhaps for this reason, birds do not commonly sing in open environments near the ground on sunny days. Rather, they sing in flight well above ground, or from a perch (Wiley 2009).
On calm nights, the opposite air temperature gradient can occur close to the ground (called a temperature inversion), because the ground cools faster than the overlying air. Air temperature increases up to 50–100 m above ground before decreasing again with altitude. Therefore, sound rays bend downwards and hit the ground (Fig. 5.15). A temperature inversion favors long-distance sound propagation, as it leads to higher received levels than predicted by spherical spreading. For this reason, nocturnal communication distances of low-frequency African savanna elephant (Loxodonta africana) calls on the savanna doubled, reaching as much as 10 km (Garstang et al. 1995). In these conditions, sound energy is channeled within the surface layer, making spreading losses effectively cylindrical rather than spherical. Garstang (2010) suggested that a loud infrasonic elephant call during the middle of the day would travel no more than 1 km (i.e., be heard over an area of 3 km2), but an elephant call at night might be heard over an area of 300 km2 (see also Garstang et al. 1995; Larom et al. 1997). Elephants might adjust the timing and abundance of their low-frequency calls according to atmospheric conditions and apply them specifically for long-distance communication.
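The audible areas quoted from Garstang (2010) follow directly from the audible radius (area = πr²); a quick arithmetic check, with an illustrative function name:

```python
import math

def audible_area_km2(radius_km):
    """Area (km^2) over which a call can be heard, given an audible radius (km)."""
    return math.pi * radius_km ** 2

# Daytime: ~1 km audible radius -> ~3 km^2, as in Garstang (2010)
print(round(audible_area_km2(1.0)))   # 3
# Night-time inversion: ~10 km radius -> ~314 km^2, i.e., on the order of 300 km^2
print(round(audible_area_km2(10.0)))  # 314
```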
An air temperature gradient can arise in other locations than just close to ground. Geiger (1965) found the air in and above the forest canopy beginning to warm immediately after sunrise, whereas the air below the canopy was slower to respond. This creates a bilinear sound speed profile with an upward refracting gradient above the canopy and a downward refracting gradient below the canopy. So, for a short period after sunrise, vocalizing birds and, for instance, howler monkeys (Alouatta sp.) located below the canopy can increase the range of their vocalizations relative to later in the day (Wiley and Richards 1978; Wiley 2009).
5.2.9 Refraction by Gradients of Wind Velocity
Strong air temperature gradients cannot persist under strong winds, so in open environments the effects of wind velocity on sound propagation are more influential than air temperature gradients (Attenborough 2007). Wind may shift the apparent direction of a sound, so that it seems to arrive from a direction different from where it was actually produced (an acoustic mirage). Wind velocity gradients can enhance or impede sound propagation, leading to negative or positive EA. The effective speed of sound is the sum of the temperature-determined speed of sound and the net wind velocity.
Attenborough et al. (2007) reported the general relationship between the sound speed profile c(z), the air temperature profile T(z), and the wind velocity profile u(z), where z is the height above ground, when the wind blows in the direction of sound propagation (when the wind blows against propagation, −u(z) is added):

c(z) = cT(z) + u(z)

where cT(z) is the still-air speed of sound at the air temperature T(z).
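Numerically, the effective sound speed can be sketched as the still-air (temperature-dependent) term plus the wind component along the propagation direction. The logarithmic wind profile and all parameter values below are illustrative assumptions, not the specific profile of Attenborough et al. (2007):

```python
import math

def effective_sound_speed(z, t0_c=20.0, lapse_c_per_m=-0.01,
                          u_ref=5.0, z_ref=10.0, z0=0.1, downwind=True):
    """Effective speed of sound (m/s) at height z (m) above ground.

    Temperature term: linear lapse from ground temperature t0_c.
    Wind term: logarithmic profile with speed u_ref at height z_ref and
    roughness length z0 (all illustrative). Downwind propagation adds
    u(z); upwind propagation subtracts it.
    """
    t_z = t0_c + lapse_c_per_m * z         # air temperature at height z
    c_t = 331.3 + 0.606 * t_z              # still-air sound speed
    u_z = u_ref * math.log(max(z, z0) / z0) / math.log(z_ref / z0)
    return c_t + u_z if downwind else c_t - u_z

# Downwind, effective sound speed increases with height (downward refraction);
# upwind, it decreases with height (upward refraction):
print(effective_sound_speed(2.0) < effective_sound_speed(50.0))    # True
print(effective_sound_speed(2.0, downwind=False) >
      effective_sound_speed(50.0, downwind=False))                 # True
```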
Wind velocity is lowest at the ground and increases with altitude (Figs. 5.14c, 5.15c). Sound traveling upwind refracts upwards and sound traveling downwind refracts downward (Fig. 5.14b, Fig. 5.15b). As with temperature gradients, this creates a shadow zone upwind (Fig. 5.14b), where the sound is not heard. Downwind, sounds propagate in a channeled way (Fig. 5.15b) with less loss. Sound attenuates more against the wind than with the wind. Despite this common phenomenon, Wiley (2009) commented that there are no documented cases of animals selectively communicating downwind. But refraction by gradients of wind velocity played a significant role in Civil War battles in the rolling hills of the eastern U.S. There was no radio communication in the nineteenth century, so commanders often depended on what they heard of the battle in front of them to make decisions about troop movements. An acoustic shadow zone existed during the Battle of Gettysburg and commanders could not hear the sounds of battle just 10 miles away, whereas people 150 miles away in Pittsburgh clearly heard the skirmish (Ross 2000).
Sound maps portray the attenuation of sound with distance from a source. The maps take a bird’s-eye view, showing attenuation in 360° about a sound source. Such maps can be produced for a specific receiver altitude, or, more commonly, show the maximum received level over a range of altitudes with the intent of yielding “conservative” estimates of received level. The attenuation pattern radiating from the sound source is typically irregular in shape (rather than concentric) and helps identify environmental conditions that impede or promote sound propagation. Sound-mapping tools commonly utilize data on topography and ground absorption, air temperature, and wind direction and speed. The example in Fig. 5.16 shows how wind attenuated noise from a gunshot upwind but enhanced received levels downwind.
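A toy version of one sound-map entry can be sketched by combining spherical spreading with a crude, direction-dependent excess-attenuation term for wind. Everything here (the function name, the cosine-shaped wind penalty, and the parameter values) is an illustrative assumption, not a calibrated propagation model:

```python
import math

def received_level_db(source_level_db, x, y, wind_dx=1.0, wind_dy=0.0,
                      wind_ea_db=10.0):
    """Toy sound-map entry: spherical spreading plus a crude directional
    'excess attenuation' term that penalizes upwind receiver positions.
    (x, y) is the receiver position in meters relative to the source."""
    r = max(math.hypot(x, y), 1.0)
    tl = 20.0 * math.log10(r)              # spherical spreading loss
    # Cosine of the angle between the receiver direction and the wind vector:
    cos_a = (x * wind_dx + y * wind_dy) / (r * math.hypot(wind_dx, wind_dy))
    ea = wind_ea_db * max(0.0, -cos_a)     # extra loss only in upwind directions
    return source_level_db - tl - ea

# Downwind (+x) vs. upwind (-x) receivers at the same 500-m range:
print(received_level_db(120.0, 500.0, 0.0))    # higher received level downwind
print(received_level_db(120.0, -500.0, 0.0))   # lower level in the upwind shadow
```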
5.2.10 Attenuation from Air Turbulence
Turbulence refers to unsteady and irregular motion of the air. It is very difficult to model and predict. It may be mechanically or thermally induced. Mechanical turbulence is caused by friction, for example, when air moves over rough ground or past obstacles such as houses and trees. Friction causes eddies and thus turbulence, which is stronger at higher wind speeds and over rougher terrain. Turbulence is particularly great during fall winds (katabatic winds), which shoot down mountain slopes. Thermal turbulence is created when the sun heats the ground unevenly. For example, bare ground warms up faster than fields with vegetative cover or bodies of water. Convective air currents are established, with warm, less dense air rising and cold, denser air sinking. These currents, in turn, may generate eddies. Eddies may extend from the ground to a few hundred meters in height. They can be of various sizes (height and diameter), and larger eddies may break up into smaller ones. Because of air temperature gradients and wind, air is always in motion, and this motion may generate turbulence.
Turbulence causes EA, which increases with distance from the source, with the level of turbulence, and with sound frequency (see red curve in Fig. 5.11b). EA is typically highest during daytime and on hot, sunny days. A characteristic effect of turbulence on sound propagation is that received levels at a fixed location fluctuate quickly with time and, beyond some range, this fluctuation stabilizes at a standard deviation of about 6 dB (Daigle et al. 1983). Van Staaden and Römer (1997), for instance, reported that at night, the sound pressure level of the song of an African bladder grasshopper (Bullacris intermedia) over open grassland decreased with distance at very nearly the 6 dB per doubling of distance expected for spherical spreading. During daytime, however, the attenuation was much larger and more variable due to air turbulence.
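The 6-dB-per-doubling benchmark for spherical spreading (and the 3-dB-per-doubling counterpart for cylindrical spreading discussed earlier for the surface layer) follows from the logarithmic spreading-loss formulas; a short sketch with an illustrative function name:

```python
import math

def spreading_loss_db(r, r_ref=1.0, geometry="spherical"):
    """Geometric spreading loss (dB) from reference distance r_ref to r.
    Spherical: 20*log10(r/r_ref), i.e., 6 dB per doubling of distance;
    cylindrical: 10*log10(r/r_ref), i.e., 3 dB per doubling."""
    factor = 20.0 if geometry == "spherical" else 10.0
    return factor * math.log10(r / r_ref)

print(spreading_loss_db(2.0))                          # ~6.02 dB per doubling
print(spreading_loss_db(2.0, geometry="cylindrical"))  # ~3.01 dB per doubling
```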
For more in-depth reading on outdoor sound propagation, please see Attenborough (2007), Attenborough et al. (2007), Larsen and Wahlberg (2017), Wahlberg and Larsen (2017), or Larsen and Radford (2018).
5.3 The Source-Path-Receiver Model for Animal Acoustic Communication
The SPRM can be used to examine acoustic communication among animals. In the example of Fig. 5.17, two gentoo penguins (Pygoscelis papua) are communicating within their nesting colony in Antarctica. The sender (i.e., the source) emits a penguin display call. The call spreads through the habitat, experiencing various forms of attenuation. The receiver is another gentoo penguin. It might respond acoustically and thus become the next sender. Whether this two-way acoustic communication is successful depends on a number of parameters.
The locations of sender and receiver matter; the closer together they are, the more likely communication is to succeed. If the source emission pattern is directional rather than omnidirectional (i.e., the call can be emitted in a specific direction), then the orientation of the sender towards the receiver matters. Similarly, if the receiver’s hearing is directional, then the receiver’s orientation affects communication success. A higher source level will increase the likelihood of successful reception, unless the environment is highly reverberant, in which case the echoes would also be louder and potentially interfere with communication success. The frequency content of the call matters, because different frequencies propagate differently, and the hearing abilities of the receiver are frequency-dependent.
Along the path, some of the call energy is lost due to geometrical spreading and some is absorbed by the air, snow, and soil. The direction of propagation changes due to reflection and scattering off rocks, and due to refraction by sound speed gradients in air. Diffraction around mountains might play a role over longer ranges. Ambient noise in the environment does not affect sound propagation; i.e., it neither leads to attenuation nor changes the direction of propagation.
Ambient noise in the environment affects whether the call is received and correctly interpreted. Ambient noise can be of abiotic, biotic, or anthropogenic origin. Wind causes noise, as do waves and breaking ice. The other penguins in the colony create ambient noise with their own acoustic communications. Human presence (e.g., chatting tourists stomping through the snow towards the penguin colony) might add to the ambient noise. Ambient noise at the location of the receiver lowers the signal-to-noise ratio (SNR) at which the call is received. The critical ratios (specific to the receiver’s auditory system; see Chap. 10) dictate the SNR below which the call is masked by the ambient noise and thus not detected. At intermediate SNRs, the call might be detected, but not correctly interpreted. Masking-release processes (also specific to the receiver’s auditory system) include comodulation masking release and spatial release from masking (e.g., Erbe et al. 2016) and aid signal detection and interpretation. Ambient noise at the sender may lead to the Lombard effect (Lombard 1911), whereby the sender raises the source level of its call, actively changes the spectral characteristics to move sound energy out of the frequency band most at risk of masking, and repeats the call to increase the likelihood of reception. Finally, ambient noise may prompt anti-masking strategies in both sender and receiver, whereby they change their location and orientation (both towards each other) to foster communication success.
5.3.1 The Sender
In animal acoustic communication, the signal that is being sent depends on the sender’s species, demographic parameters, behavioral state, and many other factors. Obviously, different taxonomic groups produce different sounds, ranging from infrasonic rumbles of elephants to ultrasonic clicks of bats (see Chap. 8 on classifying animal sounds). But even closely-related species may be told apart acoustically. For example, Gerhardt (1991) found that the number of pulses in the advertisement call in male Eastern gray treefrogs (Dryophytes versicolor) and Cope’s gray treefrogs (Dryophytes chrysoscelis) is the major cue distinguishing sympatric males who are similar in size and color. While species-specific calls of bats have been recognized for decades (Balcombe and Fenton 1988; Fenton and Bell 1981; O’Farrell et al. 1999), more recently, acoustic differences have been noted in bat species that are difficult to tell apart morphologically (Gannon et al. 2001; Gannon et al. 2003; Gannon and Racz 2006). The more we record and document species’ repertoires, the more successful bioacousticians will become at identifying the sender’s species.
Within the same species, populations living in different geographic regions and habitats may exhibit differences in their sounds, as demonstrated for Italian vs. English tawny owls (Strix aluco; Galeotti et al. 1996), pikas (Ochotona spp.; Trefry and Hik 2010), and chimpanzees (Pan troglodytes schweinfurthii; Mitani et al. 1992). Animals can tell conspecifics from a different region or population apart. Auditory neighbor-stranger discrimination has been demonstrated, for instance, in concave-eared torrent frogs (Odorrana tormota; Feng et al. 2009) and alder flycatchers (Empidonax alnorum; Lovell and Lein 2004), where territory holders respond less aggressively towards played-back neighbor songs than to those of strangers, the “dear enemy effect.”
Not just population identity, but even individual identity may be encoded in the outgoing signal; for example, in oilbirds (Steatornis caripensis; Suthers 1994), banded mongoose (Mungos mungo; Fig. 5.18; Jansen et al. 2012), and fallow deer (Dama dama; Vannoni and McElligott 2007). Galeotti and Pavan (1991) studied an urban population of non-songbirds, tawny owls, in Pavia, Italy, and demonstrated that the males’ territorial hoots have a clear species-specific structure with individual variations mainly in the final note of the call. Bats use individualized calls as they aggregate. For example, Melendez and Feng (2010) determined that communication calls of little brown bats (Myotis lucifugus) were individually distinct in minimum frequency, maximum frequency, and call duration. Individual pallid bats (Antrozous pallidus) emitted unique calls below the frequency of their echolocation clicks and in the presence of other bats (Arnold and Wilkinson 2011). Wilkinson and Boughman (1998) provided evidence that the greater spear-nosed bat (Phyllostomus hastatus) used individual social calls to coordinate feeding on clumped nectar and fruit resources. Colonial animals, such as penguins, gulls, pinnipeds, and bats, especially rely on individual acoustic recognition between a mother and her offspring. These mothers often leave their young in a colony while they forage, so proper recognition of their own young upon return is important to fitness. Especially in birds without nests and physical landmarks, such as king penguins (Aptenodytes patagonicus), acoustic recognition between parents and chicks becomes critical (Aubin and Jouventin 2002; Searby et al. 2004).
As organisms grow, their physical dimensions and the size of their sound-producing organs increase. Generally, emitted sounds transition from high-frequency, low-amplitude sounds to low-frequency, high-amplitude sounds (Hardouin et al. 2014). This is partly a consequence of simple physics: animals cannot efficiently emit sounds with wavelengths longer than the dimensions of their sound-emitting organs (e.g., see Michelsen 1992; Genevois and Bretagnolle 1994; Fletcher 2004; and Larsen and Wahlberg 2017). For instance, Charlton et al. (2011) reported that increased body size in male koalas (Phascolarctos cinereus) was reflected in the closer spacing of vocalization formants. (Formants refer to concentrations of acoustic energy around particular frequencies caused by resonances in the vocal tract.) Stoeger-Horwath et al. (2007) reported age-dependent variations in the grunt and trumpet calls of African savanna elephants. Grunts were only recorded in individuals less than 2 months of age, and infants never produced trumpet calls until they were 3 months old. The authors also reported age-dependent variations in the low-frequency rumble; older individuals rumbled at a lower fundamental frequency than younger individuals, and there was also a tendency for rumble duration to increase slightly with age. Weddell seal (Leptonychotes weddellii) pups on rookeries emit high-frequency calls that transition into low-frequency adult calls used exclusively while hauled-out on the ice (Thomas and Kuechle 1982). Reby and McComb (2003) reported that lower-frequency roars in red deer (Cervus elaphus) stags were associated with greater age and weight, and so provided “honest” cues about reproductive condition.
In many species, sex-specific differences in the acoustic repertoires are employed to ensure proper mate selection (Hardouin et al. 2014). The sender’s reproductive state and drive for mating are often reflected in its acoustic signals. In songbirds and many orthopteran insects, only males sing (Miller et al. 2007; Riede et al. 2010). Songs are under the influence of reproductive hormones associated with courtship, and songbird songs are long, complex, and repeated in a typical and recognizable sequence of sounds. In species in which males compete acoustically to attract a female mate, a substandard mating call could indicate immaturity, agedness, or poor health of the caller. For example, Hardouin et al. (2007) examined hoots by 17 male scops owls (Otus scops) on the Isle of Oléron, France. Heavier male owls made lower-frequency hoots, which could give them a competitive mating advantage over lighter males.
Context further determines acoustic signaling. For example, predators often hunt quietly, and prey remain silent when they are aware of being stalked. The classic case of moths (prey) attempting to jam the echolocation signals of bats (predators) with a counter signal to confuse the approaching predator has developed another twist. Ter Hofstede and Ratcliffe (2016) found that, “specific predator counter-adaptations include calling at frequencies outside the sensitivity range of most eared prey, changing the pattern and frequency of echolocation calls during prey pursuit, and quiet, or ‘stealth,’ echolocation.” Acoustic interactions between a parent and offspring are often brief and relatively quiet to conceal and protect the young. In contrast, messages with a high reproductive value, such as mating calls or territorial defense calls, and calls with high survival value, such as infant distress calls or adult alarm calls, are produced loudly and repeatedly. To this point, it has been shown that distress calls of three species of pipistrelle bats (Pipistrellus nathusii, P. pipistrellus, and P. pygmaeus) were structurally convergent, “consisting of a series of downward-sweeping, frequency-modulated elements of short duration and high intensity with a relatively strong harmonic content” (Russ et al. 2004). The study suggested that it was not as important to have species-specific signals as it was to have some signal that provoked mobbing of the predator by bats, regardless of the species of bat.
Ambient noise at the location of the sender may also affect signal emission level, repetition, and spectral shifts (collectively called the Lombard effect; Brumm and Zollinger 2011). For instance, male túngara frogs (Engystomops pustulosus) increased the level, repetition, and complexity of their calls when noise overlapped with their normal frequency band of calling but not when noise was higher and non-overlapping in frequency (Halfwerk et al. 2016). Brumm (2004) and Brumm and Todt (2003) noted that birds in a noisy environment called louder and more often, and repositioned themselves, possibly to increase the likelihood of the sound being received. Similarly, greater horseshoe bats (Rhinolophus ferrumequinum) increased their call level and shifted frequency in noisy environments (Hage et al. 2013). Eliades and Wang (2012) examined the neural processes underlying the Lombard effect in marmoset monkeys (Callithrix jacchus) and found that increased vocal intensity was accompanied by a change in auditory cortex activity toward neural response patterns observed during vocalizations under normal feedback conditions.
Many animal communication calls are close to omnidirectional, radiating equally in all directions, at least at their lower frequencies (Larsen and Dabelsteen 1990). However, some bird species (e.g., juncos, warblers, and finches) showed an ability to focus their calls in the direction of an owl to warn off the predator. Yorzinski and Patricelli (2009) examined the acoustic directionality of antipredator calls of 10 species of passerines and found that some birds would “call out of the side of their beaks” with their head pointed away from conspecifics in an apparent attempt at ventriloquist behavior. Whether terrestrial animals can actively change their sound emission directivity in response to noise (in order to enhance acoustic communication) remains to be investigated.
5.3.2 The Path and the Acoustic Environment
As the signal leaves the sender and travels through the environment, it is subjected to various forms of attenuation (as detailed above) and so the level at the receiver location is less than the source level. In addition, ambient noise at the receiver location reduces the SNR, making it harder for the receiver to detect the signal. Ambient noise may be classed according to its sources: abiotic, biotic, or anthropogenic. Chapter 7 provides a detailed overview of ambient noise with example spectrograms.
In terms of abiotic ambient noise, wind is a major contributor, and its noise level increases with wind speed. In addition, remember that the direction of the wind (i.e., upwind or downwind) affects the distance over which sounds propagate. Wind drives other types of noise, such as noise from vegetation moving in the wind. Even without wind, there may be noise from branches creaking and breaking in the heat, or from rustling leaves in the understory as animals walk through. Wind also drives waves; surf noise or noise from breaking waves is typical of coastal areas. Even without wind, moving water, such as waterfalls, can be noisy. Rain, hail, and thunderstorms create noise. Geological events such as earthquakes, seismic rumblings, and volcanic eruptions contribute noise to the terrestrial soundscape. In polar regions, melting ice and calving glaciers contribute to ambient noise.
Biotic ambient noise comes from animals in the environment. These can be of the same or different species from the target species. Several taxa call in large numbers at certain times of day and season, significantly raising ambient noise levels (e.g., chorusing cicadas, katydids, or frogs). Biologists typically think of soniferous animals as calling with specialized anatomies for sound production (i.e., syringes in birds and vocal cords in mammals). However, most animals also can produce mechanical sounds using external anatomies, such as wing-stridulation by a locust, abdomen vibration by a spider, beak-pecking by a woodpecker, teeth-chattering by a squirrel, foot-thumping by a rabbit, etc. In addition, animals can produce unintentional sounds, such as noise associated with rustling leaves as an animal walks through a forest, respiration noise, flight noise, feeding sounds, etc., not intended for communication with a conspecific. Example spectrograms for many of these sounds are found in Chap. 7 on soundscapes as well as Chap. 8 on detecting and classifying animal sounds.
Anthropogenic ambient noise is due to aircraft, road traffic, trains, ships, military activities, construction activities, etc. Increasing encroachment of human activities on animal habitats results in increased noise exposure for all taxa of animals (see Chap. 13 on noise impacts).
Ambient noise varies with time on scales of hours, days, lunar phases, seasons, and years, owing to a combination of sound propagation effects and source behavior. The time of day and season of year affect sound propagation. As explained above, sounds can be heard from farther away during the night; for example, a train can be heard in the distance at night, but not during the day. Walking in the woods during the winter, a listener can hear sounds over much greater distances than during the summer, when vegetation is thick. In many animals, sound-production rates are highest during the breeding season. Chorusing insects, amphibians, and birds precisely time the commencement of their cacophonies to a breeding season each year. Amphibians stop calling when they go into winter hibernation, so chorusing can stop abruptly in late autumn. Some birds migrate, so their songs are missing from the winter soundscape. Many migrating birds are soniferous, and their flight calls can temporarily dominate the soundscape as they pass through an area during spring migration (e.g., a honking flock of migrating geese or a chirping flock of starlings). Yet other species of birds remain in temperate areas over winter and produce sounds all year long (e.g., cardinals, sparrows, and snow juncos). Tropical insects, frogs, and birds can reproduce multiple times per year; they do not migrate or hibernate, and so are soniferous throughout the year. Diurnal cycles exist in many animals, with birds calling in the morning, insects in the afternoon, frogs in the evening, and nocturnal animals in the middle of the night.
5.3.3 The Receiver
The same factors that can affect the sender also could affect the receiver’s ability to detect and interpret a signal (i.e., species, population, individual traits, age, sex, context, and ambient noise). On the species level, different species typically hear sound at different frequencies and levels. In other words, audiograms are species-specific (Fig. 5.19). Fortunately, data on hearing abilities of invertebrates, insects, reptiles, amphibians, fish, birds, and mammals continue to accumulate (see Volume 2). Nonetheless, there is some intra-species and individual variability in hearing (see Chap. 10).
In American mink (Neovison vison), for instance, hearing sensitivity and frequency range change markedly with postnatal age. Pups up to 32 days old were almost deaf, whereas three weeks later, their audiogram started to resemble that of an adult in shape, but they remained less sensitive than adults, especially below 10 kHz (Brandt et al. 2013). There might be good reasons why hearing in young is immature. For example, a male fruit fly (Drosophila melanogaster) cannot hear the female’s flight tone until he is physically mature enough to mate (Eberl and Kernan 2011). This assures the female fruit fly that any pursuing male is mature. Hearing capabilities further change over an adult’s life. Natural deterioration of hearing with age, due to anatomical and physiological aging, is called presbycusis. Hearing loss can also be caused by acute noise exposure at high levels and by chronic exposure to moderate noise (see Chap. 13). Hearing loss likely affects the ability of a receiver to hear and interpret a sender’s message. For example, a hearing-impaired moth, which typically avoids a bat predator through an evasive flight pattern, will be easier to capture if it cannot hear the bat’s echolocation signals.
The receiver’s sex rarely influences its hearing capabilities; however, Narins and Capranica (1976, 1980) provided an example of sex differences in the auditory reception system of a Puerto Rican treefrog, the coquí frog (Eleutherodactylus coqui). Male and female treefrogs responded to different notes of the male’s two-note “co-qui” call. Females were attracted to the “qui” part of the call. Males paid most attention to the “co” part of the call, which was important in male–male aggressive interactions. The authors found that the inner-ear basilar papilla was tuned differently in males and females; males had fewer fibers tuned to the “qui” part of the call and females had fewer fibers tuned to the “co” part. These differences also occurred in higher-order neurons in the brain, where response decisions take place. Later studies (Mason et al. 2003) showed similar sexual differences in the middle ear of bullfrogs (Lithobates catesbeianus).
Ambient noise is a ubiquitous factor influencing signal reception and interpretation. Having experienced various forms of attenuation along its path, a signal will be audible if its amplitude remains above the power spectral density level of the ambient noise plus the critical ratio of the receiver. The critical ratio is essentially a minimum SNR needed for signal detection (see Chap. 10 for more information on the critical ratio). An even higher SNR is needed for signal discrimination, recognition, and finally, comfortable communication (Fig. 5.20; Lohr et al. 2003; Dooling et al. 2009; Dooling and Blumenrath 2013; Dooling and Leek 2018). Some birds take advantage of these limitations by producing both high-amplitude broadcast sounds and low-amplitude soft sounds. The former are public, since they cover a large active space with many potential receivers, whereas the latter are private, as they cover a very small active space with only a few receivers (Larsen 2020).
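The audibility condition described here can be reduced to a simple comparison of levels in dB. This is a sketch of the detection threshold logic only, not of discrimination or recognition, and the function name is illustrative:

```python
def is_audible(received_level_db, noise_spectral_density_db, critical_ratio_db):
    """A signal is (just) detectable if its received level exceeds the ambient-noise
    power spectral density level plus the receiver's critical ratio (a minimum SNR)."""
    return received_level_db >= noise_spectral_density_db + critical_ratio_db

# Example with illustrative values: a 60-dB call against a 35-dB noise density
# and a 22-dB critical ratio (detection threshold = 35 + 22 = 57 dB):
print(is_audible(60.0, 35.0, 22.0))  # True: 60 >= 57
print(is_audible(55.0, 35.0, 22.0))  # False: 55 < 57
```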
The auditory systems of some animals have built-in masking-release processes to reduce the impact of ambient noise. A spatial release from masking results from the directional hearing capabilities of the animal. If the signal arrives from a direction in which the receiver is more sensitive and the noise arrives from a direction in which the receiver is less sensitive, then the reception directivity improves the SNR and the signal can be detected in higher ambient noise. A spatial release from masking has been demonstrated in several taxa, including tropical crickets (Paroecanthus podagrosus and Diatrypa sp.; Schmidt and Römer 2011), gray treefrogs (Bee 2008), budgerigars (Melopsittacus undulatus; Dent et al. 1997), and pigmented Guinea pigs (Cavia porcellus; Greene et al. 2018). A comodulation masking release is possible if the noise is broadband and amplitude-modulated coherently across its frequencies. The animal might then utilize information about the noise from frequencies outside of the signal frequency band to filter out the noise within the frequency band of the signal. A comodulation masking release has been demonstrated in gray treefrogs (Bee and Vélez 2018), European starlings (Sturnus vulgaris; Klump and Langemann 1995), and house mice (Mus musculus; Klink et al. 2010). Additionally, animals have a host of behavioral adaptations to optimize sound reception. For example, an animal may improve the SNR for sound arriving at its ears by approaching the source, tilting its head, adjusting its pinnae (in the case of mammals), or moving away from a noise source (Nelson and Suthers 2004).
The Source-Path-Receiver Model (SPRM) is used widely in technical noise control and illustrates the importance of exploring a signal at all points between the source and receiver and of understanding factors that affect the observations. This chapter developed the SPRM for the example of animal acoustic communication (also see Chap. 11). The influences of the sender’s and receiver’s species, age, sex, individual identity, and behavioral status were discussed. The receiving animal’s hearing ability is a major factor for communication success.
Terminology related to sound propagation (or the path) was defined and basic concepts of outdoor sound propagation were developed, supported with simple equations. Several factors play an important role in sound propagation: distance between sender and receiver, air temperature, wind (direction and speed), obstacles along the path, and ground cover. The concepts of source level, received level, sound absorption, reflection, scattering, reverberation, diffraction, refraction, acoustic shadows, acoustic mirages, air temperature gradients, and wind speed gradients were illustrated. Two types of geometric spreading (i.e., spherical and cylindrical) were applied. Examples for ray tracing were provided. Ambient noise (including its abiotic, biotic, and anthropogenic sources) in terrestrial environments and its influence on both sender and receiver was discussed.
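The two geometric spreading laws named above take standard textbook forms, sketched here together with a simple linear absorption term. The absorption coefficient below is a placeholder value, not one taken from the chapter.

```python
import math

def spreading_loss_db(r, r0=1.0, spherical=True):
    """Transmission loss (dB) from geometric spreading:
    20 log10(r/r0) for spherical, 10 log10(r/r0) for cylindrical."""
    factor = 20.0 if spherical else 10.0
    return factor * math.log10(r / r0)

def received_level_db(source_level_db, r, alpha_db_per_m=0.0, spherical=True):
    """Received level = source level - spreading loss - absorption,
    with absorption approximated as alpha_db_per_m * r."""
    return (source_level_db
            - spreading_loss_db(r, spherical=spherical)
            - alpha_db_per_m * r)

# Placeholder example: 100 dB source level re 1 m, 100 m range,
# 0.005 dB/m absorption -> 100 - 40 - 0.5 = 59.5 dB.
rl = received_level_db(100.0, 100.0, alpha_db_per_m=0.005)
```

Doubling the distance costs 6 dB under spherical spreading but only 3 dB under cylindrical spreading, which is why ducted propagation (e.g., under an inversion layer) can extend range dramatically.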
The SPRM may be applied to many other bioacoustic scenarios or studies such as animal biosonar (where the sender and receiver are the same individual; see Chap. 12) or the effects of noise on animals (where the source might be a highway; see Chap. 13). It would also be useful to consider passive acoustic monitoring (of animals or soundscapes) within the framework of the SPRM to understand the sound sources recorded, the way the environment affects the recorded soundscape, and the effects (and potential artifacts) of the recording system (i.e., the receiver; see Chaps. 2 and 7). The SPRM might also guide the bioacoustician in setting up audiometric experiments (where the source is an engineered signal; see Chap. 10). The SPRM is a fundamental concept helpful in bioacoustic study design and interpretation.
5.5 Additional Resources
The following sites were last accessed 3 February 2021.
NoiseModelling is a free software package developed by the French Centre National de la Recherche Scientifique (CNRS) and the Université Gustave Eiffel to produce sound maps: https://noise-planet.org/noisemodelling.html
Dan Russell’s Acoustics and Vibration Animations: https://www.acs.psu.edu/drussell/demos.html
Example SPRM for hazard control. Canadian Centre for Occupational Health and Safety, Government of Canada; https://www.ccohs.ca/oshanswers/hsprograms/hazard_control.html; accessed 4 December 2020.
Example SPRM for traffic noise. Environmental Protection Department, The Government of the Hong Kong Special Administrative Region: https://www.epd.gov.hk/epd/noise_education/young/eng_young_html/m3/m3.html; accessed 4 December 2020.
References

Arnold B, Wilkinson G (2011) Individual specific contact calls of pallid bats Antrozous pallidus attract conspecifics. Behav Ecol Sociobiol 65:1581–1593. https://doi.org/10.1007/s00265-011-1168-4
Attenborough K (2007) Sound propagation in the atmosphere. In: Rossing TD (ed) Springer handbook of acoustics. Springer, New York, pp 113–147
Attenborough K, Taherzadeh S, Bass HE, Di X, Raspet R, Becker GR, Güdesen A, Chrestman A, Daigle GA, L’Espérance A, Gabillet Y, Gilbert KE, Li YL, White MJ, Naz P, Noble JM, van Hoof HAJM (1995) Benchmark cases for outdoor sound propagation models. J Acoust Soc Am 97(1):173–191. https://doi.org/10.1121/1.412302
Attenborough K, Li KM, Horoshenkov K (2007) Predicting outdoor sound. Taylor & Francis, London
Aubin T, Jouventin P (2002) How to vocally identify kin in a crowd: the penguin model. Adv Study Behav 31:243–277
Aylor D (1972a) Noise reduction by vegetation and ground. J Acoust Soc Am 51(2):197–205
Aylor D (1972b) Sound transmission through vegetation in relation to leaf area density, leaf width, and breadth of canopy. J Acoust Soc Am 51(1):411–414
Balcombe JP, Fenton MB (1988) The communication role of echolocation calls in vespertilionid bats. In: Nachtigall PE, Moore PWB (eds) Animal Sonar, NATO ASI science (Series A: Life Sciences), vol 156. Springer, Boston, MA
Bee MA (2008) Finding a mate at a cocktail party: spatial release from masking improves acoustic mate recognition in grey treefrogs. Anim Behav 75(5):1781–1791. https://doi.org/10.1016/j.anbehav.2007.10.032
Bee MA, Vélez A (2018) Masking release in temporally fluctuating noise depends on comodulation and overall level in Cope’s gray treefrog. J Acoust Soc Am 144(4):2354–2362. https://doi.org/10.1121/1.5064362
Bradbury JW, Vehrencamp SL (2011) Principles of animal communication, 2nd edn. Sinauer Associates, Sunderland, MA
Brandt C, Malmkvist J, Nielsen RL, Brande-Lavridsen N, Surlykke A (2013) Development of vocalization and hearing in American mink (Neovison vison). J Exp Biol 216:3542–3550
Brumm H (2004) The impact of environmental noise on song amplitude in a territorial bird. J Anim Ecol 73:434–440
Brumm H, Todt D (2003) Facing the rival: directional singing behaviour in nightingales. Behaviour 140(1):43–53
Brumm H, Zollinger SA (2011) The evolution of the Lombard effect: 100 years of psychoacoustic research. Behaviour 148(11–13):1173–1198. https://doi.org/10.1163/000579511X605759
Chappuis C (1971) Un exemple de l’influence du milieu sur les émissions vocales des oiseaux: L’évolution des chants en fôret équatoriale. Terre Vie 118:183–202
Charlton BD, Ellis WA, McKinnon AJ, Cowin GJ, Brumm J, Nilsson K, Fitch WT (2011) Cues to body size in the formant spacing of male koala (Phascolarctos cinereus) bellows: honesty in an exaggerated trait. J Exp Biol 214:3414–3422
Dabelsteen T (1981) The sound pressure level in the dawn song of the blackbird Turdus merula and a method for adjusting the level in experimental song to the level in natural song. Z Tierpsychol 56(2):137–149
Dabelsteen T, Larsen ON, Pedersen SB (1993) Habitat-induced degradation of sound signals: Quantifying the effects of communication sounds and bird location on blur ratio, excess attenuation, and signal-to-noise ratio in blackbird song. J Acoust Soc Am 93(4):2206–2220
Daigle GA, Piercy JE, Embleton T (1983) Line-of-sight propagation through atmospheric turbulence near the ground. J Acoust Soc Am 74(5):1505–1513
Defrance J, Barriere N, Premat E (2002) Forest as a meteorological screen for traffic noise. In: Proceedings of the 9th International Congress on Sound and Vibration (ICSV9), Orlando, FL
Dent ML, Larsen ON, Dooling RJ (1997) Free-field binaural unmasking in budgerigars (Melopsittacus undulatus). Behav Neurosci 111:590–598
Dooling RJ, Blumenrath SH (2013) Avian sound perception in noise. In: Brumm H (ed) Animal communication and noise, Animal signals and communication, vol 2. Springer-Verlag, Heidelberg, pp 229–250
Dooling RJ, Leek MR (2018) Communication masking by man-made noise. In: Slabbekoorn H, Dooling RJ, Popper AN, Fay RR (eds) Effects of anthropogenic noise on animals, Springer handbook of auditory research, vol 66. Springer, New York, pp 23–46
Dooling RJ, West EW, Leek MR (2009) Conceptual and computational models of the effects of anthropogenic noise on birds. Proc Inst Acoust 31(1):1
Eberl DF, Kernan MJ (2011) Recording sound-evoked potentials from the Drosophila antennal nerve. Cold Spring Harb Protoc 2011:prot5576
Eliades SJ, Wang X (2012) Neural correlates of the Lombard effect in primate auditory cortex. J Neurosci 32(31):10737–10748. https://doi.org/10.1523/JNEUROSCI.3448-11.2012
Erbe C, Reichmuth C, Cunningham KC, Lucke K, Dooling RJ (2016) Communication masking in marine mammals: a review and research strategy. Mar Pollut Bull 103:15–38. https://doi.org/10.1016/j.marpolbul.2015.12.007
Fay RR (1988) Hearing in vertebrates: a psychophysics databook. Hill-Fay Associates, Winnetka IL
Fay RR, Popper AN (1994) Comparative hearing: mammals. Springer handbook of auditory research series. Springer-Verlag, New York
Feng AS, Arch VS, Yu Z, Yu X-J, Xu Z-M, Shen J-X (2009) Neighbor–stranger discrimination in concave-eared torrent frogs, Odorrana tormota. Ethology 115(9):851–856
Fenton MB, Bell G (1981) Recognition of species of insectivorous bats by their echolocation calls. J Mammal 62:233–243
Fletcher NH (2004) A simple frequency-scaling rule for animal communication. J Acoust Soc Am 115:2334–2338
Galeotti P, Pavan G (1991) Individual recognition of male tawny owls (Strix aluco) using spectrograms of their territorial calls. Ethol Ecol Evol 3:113–126
Galeotti P, Appleby BM, Redpath SM (1996) Macro and microgeographical variations in the “hoot” of Italian and English Tawny Owls (Strix aluco). Ital J Zool 63:57–64
Gannon W, Racz GR (2006) Character displacement and ecomorphological analysis of two long-eared Myotis (M. auriculus and M. evotis). J Mammal 87(1):171–179
Gannon WL, Sherwin RE, de Carvalho TN, O’Farrell MJ (2001) Pinnae and echolocation call differences between Myotis californicus and M. ciliolabrum (Chiroptera: Vespertilionidae). Acta Chiropterologica 3(1):77–91
Gannon WL, O’Farrell MJ, Corben C, Bedrick EJ (2003) Call character lexicon and analysis of field recorded bat echolocation calls. In: Thomas JA, Moss CF, Vater M (eds) Echolocation in bats and dolphins. Univ Chicago Press, Chicago, IL, pp 478–484
Garcia M, Charrier I, Rendall D, Iwaniuk AN (2012) Temporal and spectral analyses reveal individual variation in a non-vocal acoustic display: the drumming display of the ruffed grouse (Bonasa umbellus, L.). Ethology 118(3):292–301
Garstang M (2010) Elephant infrasounds: Long-range communication. In: Brudzynski SM (ed) Handbook of mammalian vocalization: an integrative neuroscience approach. Elsevier BV, Oxford
Garstang M, Larom D, Raspe R, Lindeque M (1995) Atmospheric controls on elephant communication. J Exp Biol 198:939–951
Geiger R (1965) The climate near the ground. Harvard University Press, Cambridge, MA
Genevois F, Bretagnolle V (1994) Male blue petrels reveal their body mass when calling. Ethol Ecol Evol 6:377–383
Gerhardt HC (1991) Female mate choice in treefrogs: static and dynamic acoustic criteria. Anim Behav 42(4):615–635
Greene NT, Anbuhl KL, Ferber AT, DeGuzman M, Allen PD, Tollin DJ (2018) Spatial hearing ability of the pigmented Guinea pig (Cavia porcellus): minimum audible angle and spatial release from masking in azimuth. Hear Res 365:62–76. https://doi.org/10.1016/j.heares.2018.04.011
Hage SR, Jiang T, Berquist SW, Feng J, Metzner W (2013) Lombard effect in horseshoe bats. Proc Natl Acad Sci 110(10):4063–4068. https://doi.org/10.1073/pnas.1211533110
Halfwerk W, Lea AM, Guerra MA, Page RA, Ryan MJ (2016) Vocal responses to noise reveal the presence of the Lombard effect in a frog. Behav Ecol 27(2):669–676. https://doi.org/10.1093/beheco/arv204
Hardouin LA, Reby D, Bavoux C, Burneleau G, Bretagnolle V (2007) Communication of male quality in owl hoots. Am Nat 169(4):552–562
Hardouin LA, Thompson R, Stenning M, Reby D (2014) Anatomical bases of sex- and size-related acoustic variation in herring gull alarm calls. J Avian Biol 45:157–166
Heffner HH (1983) Hearing in large and small dogs: absolute thresholds and size of the tympanic membrane. Behav Neurosci 97:310–318
Heffner HE, Heffner RS (2007) Hearing ranges of laboratory animals. J Am Assoc Lab Anim Sci 46(1):20–22
Heller EJ (2013) Why you hear what you hear. Princeton University Press, Princeton, NJ
Holland J, Dabelsteen T, Pedersen SB, Paris AL (2001) Potential ranging cues contained within the energetic pauses of transmitted wren song. Bioacoustics 12(1):3–20
Jansen DA, Cant MA, Manser MB (2012) Segmental concatenation of individual signatures and context cues in banded mongoose (Mungos mungo) close calls. BMC Biol 10(1):97. https://doi.org/10.1186/1741-7007-10-97
Jensen KK, Larsen ON, Attenborough K (2008) Measurements and predictions of hooded crow (Corvus corone cornix) call propagation over open field habitats. J Acoust Soc Am 123(1):507–518
Klink KB, Dierker H, Beutelmann R et al (2010) Comodulation masking release determined in the mouse (Mus musculus) using a flanking-band paradigm. JARO 11:79–88. https://doi.org/10.1007/s10162-009-0186-7
Klump GM, Langemann U (1995) Comodulation masking release in a songbird. Hear Res 87(1):157–164. https://doi.org/10.1016/0378-5955(95)00087-K
Krokstad A, Svensson UP, Strøm S (2015) The early history of ray tracing in acoustics. In: Xiang N, Sessler G (eds) Acoustics, information, and communication. Modern acoustics and signal processing. Springer, Cham
Larom DL, Garstang M, Payne RR, Lindeque M (1997) The influence of surface atmospheric conditions on the range and area reached by animal vocalizations. J Exp Biol 200:421–431
Larsen ON (1995) Acoustic equipment and sound field calibration. In: Klump GM, Dooling RJ, Fay RR, Stebbins WC (eds) Methods in comparative psychoacoustics. Birkhäuser Verlag, Basel, pp 31–45
Larsen ON (2020) To shout or to whisper? Strategies for encoding public and private information in sound signals. In: Aubin T, Mathevon N (eds) Coding strategies in vertebrate acoustic communication, Animal signals and communication, vol 7. Springer Nature Switzerland AG, Cham, pp 11–44
Larsen ON, Dabelsteen T (1990) Directionality of blackbird vocalization. Implications for vocal communication. Ornis Scand 21:37–45
Larsen ON, Radford C (2018) Acoustic conditions affecting sound communication in air and underwater. In: Slabbekoorn H, Dooling RJ, Popper AN, Fay RR (eds) Effects of anthropogenic noise on animals. Springer handbook of acoustic research. Springer, New York, pp 109–144
Larsen ON, Wahlberg M (2017) Sound and sound sources. In: Brown CH, Riede T (eds) Comparative bioacoustics: an overview. Bentham Science Publishers, Sharjah, pp 3–62
Larsson C (2000) Weather effects on outdoor sound propagation. Int J Acoust Vib 5(1):33–36
Lipman EA, Grassi JR (1942) Comparative auditory sensitivity of man and dog. Amer J Psychol 55:84–89
Lohr B, Wright TF, Dooling RJ (2003) Detection and discrimination of natural calls in masking noise by birds: estimating the active space of a signal. Anim Behav 65:763–777
Lombard É (1911) Le signe de l’élévation de la voix. Annales des Maladies de L’Oreille et du Larynx XXXVII(2):101–109
Lovell SF, Lein MR (2004) Neighbor-stranger discrimination by song in a suboscine bird, the alder flycatcher, Empidonax alnorum. Behav Ecol 15(5):799–804
Marten K, Marler P (1977) Sound transmission and its significance for animal vocalization. I. Temperate habitats. Behav Ecol Sociobiol 2(3):271–290
Martínez-Sala R, Rubio C, García-Raffi LM, Sánchez-Pérez JV, Sánchez-Pérez EA, Llinares J (2006) Control of noise by trees arranged like sonic crystals. J Sound Vib 291:100–106
Mason MJ, Lin CC, Narins PM (2003) Sex differences in the middle ear of the bullfrog (Rana catesbeiana). Brain Behav Evol 61(2):91–101
Melendez KV, Feng AS (2010) Communication calls of little brown bats display individual-specific characteristics. J Acoust Soc Am 128:919–923
Michelsen A (1978) Sound reception in different environments. In: Ali MA (ed) Sensory ecology, NATO Adv Study Inst Ser, vol 18. Plenum Press, London, pp 345–373
Michelsen A (1992) Hearing and sound communication in small animals: evolutionary adaptations to the laws of physics. In: Webster DB, Fay RR, Popper AN (eds) The evolutionary biology of hearing. Springer-Verlag, New York, pp 61–77
Michelsen A, Larsen ON (1983) Strategies for acoustic communication in complex environments. In: Huber F, Markl H (eds) Neuroethology and behavioral physiology. Springer-Verlag, Berlin, pp 321–331
Miller EH, Williams J, Jamieson SE, Gilchrist HG, Mallory ML (2007) Allometry, bilateral asymmetry, and sexual differences in the vocal tract of common eiders Somateria mollissima and king eiders S. spectabilis. J Avian Biol 38:224–233
Mitani JC, Hasegawa T, Gros-Louis J, Marler P, Byrne R (1992) Dialects in wild chimpanzees? Am J Primatol 27:233–243. https://doi.org/10.1002/ajp.1350270402
Narins PM, Capranica RR (1976) Sexual differences in the auditory system of the tree frog Eleutherodactylus coqui. Science 192(4237):378–380
Narins PM, Capranica RR (1980) Neural adaptations for processing the two-note call of the Puerto Rican tree frog, Eleutherodactylus coqui. Brain Behav Evol 17:48–66
Nelson BS, Suthers RA (2004) Sound localization in a small passerine bird: discrimination of azimuth as a function of head orientation and sound frequency. J Exp Biol 207:4121–4133
O’Farrell MJ, Miller BW, Gannon WL (1999) Qualitative identification of free-flying bats using the Anabat detector. J Mammal 80(1):11–23
Ottemöller L, Evers LG (2008) Seismo-acoustic analysis of the Buncefield oil depot explosion in the UK, 2005 December 11. Geophys J Int 172(3):1123–1134
Price MA, Attenborough K, Heap NW (1988) Sound attenuation through trees: measurements and models. J Acoust Soc Am 84:1836–1844
Reby D, McComb K (2003) Anatomical constraints generate honesty: acoustic cues to age and weight in the roars of red deer stags. Anim Behav 65:519–530
Riede T, Fisher JH, Goller F (2010) Sexual dimorphism of the zebra finch syrinx indicates adaptation for high fundamental frequencies in males. PLoS One 5:e11368
Römer H (1998) The sensory ecology of acoustic communication in insects. In: Hoy RR, Popper AN, Fay RR (eds) Comparative hearing: insects. Springer handbook of auditory research. Springer-Verlag, New York, pp 63–96
Ross CD (2000) Outdoor sound propagation in the US Civil War. Appl Acoust 59:137–147
Russ J, Jones G, Mackie I, Racey P (2004) Interspecific responses to distress calls in bats (Chiroptera: Vespertilionidae): A function for convergence in call design? Anim Behav 67:1005–1014. https://doi.org/10.1016/j.anbehav.2003.09.003
Schmidt AKD, Römer H (2011) Solutions to the cocktail party problem in insects: selective filters, spatial release from masking and gain control in tropical crickets. PLoS One 6(12):e28593. https://doi.org/10.1371/journal.pone.0028593
Searby A, Jouventin P, Aubin T (2004) Acoustic recognition in macaroni penguins: an original signature system. Anim Behav 67:615–625
Stoeger-Horwath AS, Stoeger S, Schwammer HM, Kratochvil H (2007) Call repertoire of infant African elephants: first insights into the early vocal ontogeny. J Acoust Soc Am 121(6):3922–3931
Suthers RA (1994) Variable asymmetry and resonance in the avian vocal tract: a structural basis for individually distinct vocalizations. J Comp Physiol A 175:457–466
ter Hofstede HM, Ratcliffe JM (2016) Evolutionary escalation: the bat-moth arms race. J Exp Biol 219(11):1589–1602. https://doi.org/10.1242/jeb.086686
Thomas JA, Kuechle V (1982) Quantitative analysis of the underwater repertoire of the Weddell seal (Leptonychotes weddellii). J Acoust Soc Am 72:1730–1738
Trefry SA, Hik DS (2010) Variation in pika (Ochotona collaris, O. princeps) vocalizations within and between populations. Ecography 33:784–795
Van Staaden M, Römer H (1997) Sexual signalling in bladder grasshoppers: tactical design for maximizing calling range. J Exp Biol 200:2597–2608
Vannoni E, McElligott AG (2007) Individual acoustic variation in fallow deer (Dama dama) common and harsh groans: a source-filter theory perspective. Ethology 113:223–234
Wahlberg M, Larsen ON (2017) Propagation of sound. In: Brown CH, Riede T (eds) Comparative bioacoustics: an overview. Bentham Science Publishers, Sharjah, pp 63–121
Warfield D (1973) The study of hearing in animals. In: Gay W (ed) Methods of animal experimentation, IV. Academic Press, London, pp 43–143
West CD (1985) The relationship of the spiral turns of the cochlea and the length of the basilar membrane to the range of audible frequencies in ground dwelling mammals. J Acoust Soc Am 77:1091–1101
Wiley RH (2009) Signal transmission in natural environments. In: Squire LR (ed) Encyclopedia of neuroscience, vol 8. Academic Press, Oxford, pp 827–832
Wiley RH, Richards DG (1978) Physical constraints on acoustic communication in the atmosphere: Implications for the evolution of animal vocalizations. Behav Ecol Sociobiol 3:69–94
Wiley RH, Richards DG (1982) Adaptations for acoustic communication in birds: sound transmission and signal detection. In: Kroodsma DE, Miller EH, Quellet H (eds) Acoustic communication in birds. Academic Press, New York, pp 131–181
Wilkinson GS, Boughman JW (1998) Social calls coordinate foraging in greater spear-nosed bats. Anim Behav 55:337–350
Yorzinski JL, Patricelli GL (2009) Birds adjust acoustic directionality to beam their antipredator calls to predators and conspecifics. Proc R Soc B 277:923–932
Acknowledgements

We wish to thank Prof. Keith Attenborough for his constructive review of this chapter.
© 2022 The Author(s)
Larsen, O.N., Gannon, W.L., Erbe, C., Pavan, G., Thomas, J.A. (2022). Source-Path-Receiver Model for Airborne Sounds. In: Erbe, C., Thomas, J.A. (eds) Exploring Animal Behavior Through Sound: Volume 1. Springer, Cham. https://doi.org/10.1007/978-3-030-97540-1_5
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-97538-8
Online ISBN: 978-3-030-97540-1