5.1 Introduction

The source-path-receiver model (SPRM) provides a common framework for occupational health and safety management. It is used in hazard control to minimize the risk of worker exposure. Such hazards may be chemical (e.g., spilled compounds in a pharmaceutical laboratory), physical (e.g., falling bricks on a construction site), or acoustic (i.e., noise).

An example SPRM for chemical hazards is shown in Fig. 5.1a. The source is a poisonous chemical, which escapes and spreads through the air inside a laboratory, and the receiver is a pharmaceutical worker. The SPRM guides the health and safety manager in minimizing the risk of exposure. Ideally, the source would be eliminated, but this might not be possible if this type of chemical is required. Maybe it can be substituted by a less volatile or less toxic chemical? There may be engineering controls such as installing an isolation chamber (or glove box) or exhaust hood. Engineering controls may also be applied to the path along which the chemical travels: installing ventilators, absorbing materials, or mechanical barriers, or simply extending the length of the path to increase dilution. Finally, controls may be applied at the receiver: proper training for safe handling of the chemical, limiting work hours, rotating shifts, and wearing personal protective equipment (PPE). In terms of reducing the risk of exposure, the measures rank from most to least effective (termed the hierarchy of control): elimination, substitution, engineering controls, procedural controls, and finally, PPE.

Fig. 5.1

Examples of the source-path-receiver model for (a) chemical hazard control in a laboratory and (b) traffic noise control in a city

The SPRM applied to noise control helps break down the components of noise exposure that can be modified to reduce the risk of acoustic impacts. In the example of Fig. 5.1b, the source is a busy downtown road. Noise from the cars travels to surrounding residential buildings. The source may be eliminated by relocating all traffic to an inner-city bypass and banning all traffic downtown. Maybe private car traffic can be substituted by a quieter, electric city bus service? Imposing a speed limit reduces noise. Some cities enforce noise emission standards for cars. Long-term engineering solutions may include building a tunnel, resurfacing the road with noise-absorbing material, installing noise barrier walls along the road, or erecting earth bunds. Residential buildings may have noise-reduction (double-glazed) windows, and residents may set up their bedrooms on the side of the building facing away from the road. The specific implementation of the SPRM depends on the application. For example, residents in an apartment building would not want to wear earmuffs at home, but for workers in a noisy plant, such PPE is common practice. A poster showing the steps involved in workplace noise control is shown in Fig. 5.2.

Fig. 5.2

Poster by WorkSafe New Zealand illustrating the steps involved in noise control at the workplace. © WorkSafe, New Zealand Government, 2018; https://www.worksafe.govt.nz/dmsdocument/3987-managing-noise-risk-poster. Reproduced with permission; https://www.worksafe.govt.nz/about-us/about-this-site/copyright/. A more elaborate animation is also available (Animation of the SPRM by WorkSafe, New Zealand Government; https://youtu.be/8Cq5UR5KssA; accessed 4 December 2020.)

Even though the SPRM was originally developed to manage hazards at the workplace, it is much more broadly applicable to the day-to-day lives of humans—and animals. In fact, the SPRM is fundamental. Without a receiver, there is no hazard. Without a listener, there is no noise. Researchers of animal bioacoustics might want to apply the SPRM to their project in order to identify parameters of the source, path, and receiver that might influence the results. Other chapters in this book either explicitly or implicitly apply the SPRM. Chapter 13 on the effects of noise on animals provides examples where the source is a highway, the path leads from the highway into the surrounding bush, and the receivers are birds, whose abundance might decrease closer to the source as a result of habitat degradation by noise. Chapter 11 deals with acoustic communication between animals, and so the source may be a male frog, the path may lead through a tropical rain forest, and the receivers are nearby females of the same species. Chapter 12 is about echolocation. Here, the source and the receiver are the same individual animal. A bat echolocates on a moth: the echolocation signal reflects off the moth, informing the bat how far away its prey is. The signal travels through the environment twice: from the bat to its prey and back. Chapter 10 covers audiometry, where the sources are controlled and engineered signals (often pure tones) that are played to animals over short distances or through earphones, and the receivers are individual animals whose hearing is being measured. Chapter 7 explores soundscapes on land and under water. The sources are grouped into geophony (e.g., wind, rain, and waves), biophony (i.e., animals), and anthropophony (e.g., airplanes or ships). The paths go through the air over land, under water, and through the ground. The receivers in passive acoustic monitoring of soundscapes are recorders, which collect and store acoustic data for later analysis in the laboratory. The following sections first explore the basic concepts of sound propagation in air before applying these to an example SPRM.

5.2 Sound Propagation in Terrestrial Environments

The environment through which a sound travels alters its acoustic features such as its spectral composition and level. The effects of the environment on bioacoustic signals were well explored in the classic works of Chappuis (1971), Marten and Marler (1977), Michelsen (1978), and Wiley and Richards (1978).

Airborne sound propagation (often called outdoor sound propagation) is characterized by a number of phenomena. Sounds attenuate with distance from the sender due to geometrical attenuation (i.e., spreading) and absorption by the medium. High-frequency sounds (i.e., sounds having short wavelengths; see Chap. 4 on definitions of frequency and wavelength) propagate over shorter distances than low-frequency sounds (i.e., sounds having long wavelengths). Environmental and structural factors such as substrate composition; terrain profile; obstacles along the path; amount of vegetative cover; wind speed and direction; vertical gradients (i.e., increases or decreases) in wind speed, air temperature, and humidity; air turbulence; and, to a small degree, altitude (i.e., atmospheric pressure) affect sound propagation in air (Fig. 5.3). The propagation paths, along which sounds travel, are rarely straight lines, but rather bend (i.e., refract or diffract), reflect, and scatter. The same sound traveling along different propagation paths may interfere with itself constructively or destructively. The received sound is a weaker and often distorted version of the sent sound (Wahlberg and Larsen 2017).

Fig. 5.3

Diagram of some of the factors affecting sound propagation in air. Figure donated by Sara Torres Ortiz

This section explains the basic concepts of sound propagation in air and provides some insights into environmental effects on propagation. Some environmental factors (e.g., air temperature, wind speed and direction, and humidity) vary throughout the day and among seasons, and so sound propagation can be quite variable. Sound propagation models exist and can be used to predict the distance over which sounds travel, create noise maps, estimate changes to the acoustic (e.g., spectral) features of received sounds, and identify factors that could hinder or enhance animal communication (see Lohr et al. 2003; Jensen et al. 2008). Bioacousticians should consider the characteristics of sound propagation, which could explain variability in the receiver’s behavioral response or the effectiveness of acoustic communication.

5.2.1 Ray Traces

Sound propagation is accurately described by the acoustic wave equation. This is a second-order differential equation in four dimensions (4-d: three spatial coordinates and time). For an “easy” derivation of the acoustic wave equation, see Larsen and Radford (2018). However, in the simplest situation of symmetric geometry (i.e., an omnidirectional signal in a homogeneous medium with no reverberation), the equation can be simplified and described by one variable: the range to the source (Wahlberg and Larsen 2017). Even then, solving the wave equation under the various and variable conditions encountered in common sound propagation scenarios is quite a task. Fortunately, there are much simpler, conceptual principles of sound propagation, which can yield satisfactory results. One such concept is ray propagation or ray tracing.

Let us consider an omnidirectional source, which emits sound equally in all directions. An example is the crowing rooster in Fig. 5.4a (although it is only omnidirectional at the lower frequencies of its crow and it might not typically crow while roosting, but for the sake of science…; Larsen and Dabelsteen 1990). Wave rays point in the direction of sound propagation and are perpendicular to the wavefronts of the propagating sound. The wavefronts are spheres in 3D space (circles in 2D). Huygens’ principle (named after Christiaan Huygens, a Dutch physicist) states that every point on a wavefront can be considered a source of a new (secondary) wave. And all of the secondary wavefronts superpose to build the next (in time) primary wavefront. The wavefront at time t3 in Fig. 5.4a is also shown in Fig. 5.4b. Nine example points on this wavefront are “randomly” illustrated (as small suns). These each create their own set of concentric wavefronts, drawn at time t4. The secondary waves cancel out in some places but at the farthest range from the rooster in the center, the secondary wavefronts line up to yield the new primary wavefront at time t4.

Fig. 5.4

(a) Sketch of a rooster sitting on a branch. When the bird crows, sound is emitted in all directions (marked by a few example black arrows). The green concentric circles represent the wavefronts of the outgoing sound at times t1–t4. The wave rays are perpendicular to the wavefronts and point in the direction of sound propagation. (b) Illustration of Huygens’ principle. Each point on the wavefront at time t3 can be considered itself a (secondary) source; nine example points are marked by suns. The wavefronts of the secondary sources (shown as black circle segments) superpose to yield the new primary wavefront, drawn at time t4

As the expanding wavefront encounters features of the environment (e.g., vegetation or gradients in sound speed), its shape changes and the directions of the wave rays change. The laws of physics and principles of sound propagation can be applied to trace the propagation paths. This is called ray tracing. For an easy introduction to ray tracing, see Heller (2013). Wahlberg and Larsen (2017) suggested visualizing a ray as a “small acoustic particle travelling along a narrow beam or ray in discrete steps and bouncing-off or being refracted through surfaces.” This type of sound field visualization, first introduced in 1967 (Krokstad et al. 2015), has been used extensively in linear acoustics to model phenomena in outdoor sound propagation, aided by the computational power of modern computers (Attenborough et al. 1995).

An example of ray tracing is shown in Fig. 5.5. The omnidirectional source is located in the lower left corner, 5 m above ground at range 0, and it emits a 10-Hz tone. The wave rays are shown and follow the sound propagation paths. Sound that is initially emitted in an upwards direction bends downward at a certain altitude (depending on its initial angle of emission). This is typical for nighttime sound propagation. Once rays hit the ground, they are reflected upwards again. The sound field (i.e., the received level at every location in space) is computed by summing sound pressure over all rays. Regions where rays travel close together have high received levels (little propagation loss) and regions that only a few rays enter have low received levels (high propagation loss).

Fig. 5.5

Top: Ray traces modeling the propagation of an airborne 10-Hz tone from a point source located 5 m off the ground (lower left corner). The model suggests that sound is bent downwards (downward refraction, typical for nighttime) and bounces off the ground several times, depending on the initial direction from the source. Note the scales: These effects occur at distances much longer than typical animal sound communication distances, which normally are up to only a few hundred meters. Bottom: Contour plot of propagation loss, PL (i.e., attenuation), of the 10-Hz sound. Modified from Attenborough et al. (1995). © Acoustical Society of America, 1995. All rights reserved

For example, Ottemöller and Evers (2008) used ray tracing to describe the sound propagation of a massive vapor cloud explosion at the Buncefield fuel depot near Hemel Hempstead, UK, on the morning of 11 December 2005. A storage tank overflowed and released over 300 tons of fuel. An explosion was triggered after a vapor cloud formed and spread over a very large area (80,000 m², or about 20 acres) before igniting. The explosion was huge, caused extensive damage, injured 43 people, and was detected by seismograph stations in the UK and the Netherlands. These recordings provided detailed information on the ray trajectories of the explosion's sound.
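To make the ray-stepping idea concrete, the following minimal Python sketch traces a single ray through a horizontally stratified atmosphere, using the Snell invariant cos(θ)/c = constant and specular reflection off the ground. The gradient, step size, and launch angles are illustrative assumptions only (they are not the parameters behind Fig. 5.5), and the discrete treatment of the turning point is deliberately crude.

```python
import math

def trace_ray(theta0_deg, z0=5.0, c0=340.0, dcdz=0.1,
              ds=5.0, max_range=5_000.0):
    """Step one ray through a stratified atmosphere (illustrative sketch).

    theta0_deg: launch angle above the horizontal; z0: source height (m);
    c0: sound speed at the ground (m/s); dcdz: vertical sound speed
    gradient in (m/s)/m; positive = downward refraction (nighttime-like).
    """
    theta = math.radians(theta0_deg)
    x, z = 0.0, z0
    p = math.cos(theta) / (c0 + dcdz * z)    # Snell invariant cos(theta)/c
    path = [(x, z)]
    while x < max_range:
        x += ds * math.cos(theta)
        z += ds * math.sin(theta)
        if z < 0.0:                           # ground hit: specular bounce
            z, theta = -z, -theta
        cos_t = p * (c0 + dcdz * z)
        if cos_t >= 1.0:                      # turning point: ray reverses
            theta = -theta
        else:
            theta = math.copysign(math.acos(cos_t), theta)
        path.append((x, z))
    return path

# In a downward-refracting (nighttime-like) gradient, upward-launched
# rays reach a maximum altitude and return to the ground:
for angle in (2.0, 5.0, 10.0):
    peak = max(z for _, z in trace_ray(angle))
    print(f"launch {angle:4.1f} deg: max altitude ~ {peak:.0f} m")
```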

5.2.2 Geometrical Sound Spreading

Sound from an omnidirectional source in the free-field spreads out evenly in a spherical pattern (i.e., equally in all directions). The free-field is homogeneous (i.e., has no temperature or humidity gradients) and unimpeded by buildings or vegetation. At any receiver location in space, only a small proportion of the emitted sound arrives, and so the received sound is attenuated compared to the sound energy emitted at the source. The total attenuation or loss of sound energy from the source to a receiver is known as propagation loss (PL; formerly transmission loss). The sound pressure level at the source (defined as 1 m from a point source; see Chap. 4) is called the source level (SL), whereas the sound pressure level at the receiver at a distance (i.e., range r) from the source is called the received level (RL). The relation between these two levels is given by Eq. 5.1:

$$ RL = SL - PL $$
(5.1)

Propagation loss in the free-field is termed spherical spreading loss, which can be computed as PLsph = 20 log10(r) (for derivation of this expression, see Wahlberg and Larsen 2017). It is independent of signal frequency and only depends on the geometry of the source and sound field. So, Eq. 5.1 may be reformulated:

$$ RL= SL-20{\log}_{10}(r) $$
(5.2)

As a first approximation, spherical spreading is a good model for the propagation of terrestrial animal sounds produced in large open-air regions, such as grassland. Generally, if a bird sings on the ground up to about 10 m from a microphone, only spherical spreading needs to be considered. If the receiver is at a greater distance from the bird, then ground and atmospheric effects also must be considered. If the bird is flying overhead, then spherical spreading and atmospheric effects need to be considered when determining propagation characteristics.

If other sources of attenuation are negligible, then Eq. 5.2 can be used to calculate the source level of a vocalizing animal located at distance r from the receiver. For instance, if a bioacoustician measured RL = 65 dB re 20 μPa at a distance of 10 m from a singing bird, then SL (at 1 m from the bird) becomes 65 dB re 20 μPa + 20 log10(10) dB = 85 dB re 20 μPa m (e.g., Dabelsteen 1981). Similarly, if somebody played back a sound at a known source level of 85 dB re 20 μPa m, then the predicted RL at 1 km (= 10³ m) range would be 25 dB re 20 μPa, as 20 log10(10³) = 60.
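As a quick numerical check, Eqs. 5.1 and 5.2 translate directly into code. The short Python sketch below reproduces the two worked examples above; it assumes spherical spreading only, with no excess attenuation.

```python
import math

def spherical_pl(r):
    """Spherical spreading loss in dB (Eq. 5.2); range r in metres."""
    return 20.0 * math.log10(r)

# Back-calculate a singing bird's source level from RL = 65 dB re 20 uPa
# measured at 10 m (Eq. 5.1 rearranged: SL = RL + PL):
sl = 65.0 + spherical_pl(10.0)        # = 85 dB re 20 uPa m

# Predict the received level of an 85-dB source at 1 km:
rl_1km = sl - spherical_pl(1000.0)    # 85 - 60 = 25 dB re 20 uPa

print(f"SL = {sl:.0f} dB re 20 uPa m; RL(1 km) = {rl_1km:.0f} dB re 20 uPa")
```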

In some environments, and for some sources (i.e., line sources rather than point sources), airborne sound propagation can be better described as cylindrical spreading. For an infinitely long line source, the propagation loss as a function of range becomes PLcyl = 10 log10(r) and so Eq. 5.1 becomes:

$$ RL= SL-10{\log}_{10}(r) $$
(5.3)

Most biological line sources, however, are finite, such as a row of vocalizing birds on a power line. (Please be aware that this example is not a line source in the strict acoustic sense.) This means that geometrical spreading loss is somewhere between that of spherical and cylindrical spreading loss (Fig. 5.6). When the receiver distance from the finite line source is much less than the length of the finite line source, then the attenuation is close to that of an infinite line source (i.e., 10 log10(r)), whereas at distances comparable to or larger than the length of the finite line source, the latter acts more like a point source and attenuation develops as 20 log10(r). At sufficiently long distances, all sources can be regarded as point sources.

Fig. 5.6

Propagation loss due to geometrical spreading in air from a line source of finite length, as a function of distance r relative to the length L of the line source. At distances from the source shorter than L, the attenuation is close to 3 dB/dd (cylindrical attenuation), whereas at distances equal to or longer than L, the attenuation becomes 6 dB/dd (spherical attenuation); dd: distance doubling
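The smooth transition in Fig. 5.6 can be caricatured with a simple piecewise model: cylindrical spreading out to a range comparable to the line length L, and spherical spreading beyond, with the two branches matched at r = L. The sharp crossover at r = L is a simplifying assumption for illustration; the true curve transitions gradually.

```python
import math

def line_source_pl(r, L):
    """Crude piecewise spreading loss (dB) for a finite line source.

    Cylindrical spreading (3 dB per distance doubling) for r < L,
    spherical spreading (6 dB per doubling) beyond, matched at r = L.
    """
    if r < L:
        return 10.0 * math.log10(r)
    return 10.0 * math.log10(L) + 20.0 * math.log10(r / L)

# Attenuation vs. range for a 100-m line source (e.g., birds on a wire):
for r in (12.5, 25, 50, 100, 200, 400):
    print(f"r = {r:6.1f} m: PL = {line_source_pl(r, L=100.0):5.1f} dB")
```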

The propagation loss, however, includes much more than geometrical spreading loss: beyond some distance from the source, RL usually becomes smaller with distance than predicted by Eqs. 5.2 or 5.3. To account for this extra attenuation, Marten and Marler (1977) introduced the term excess attenuation (EA). It includes a number of other effects such as atmospheric absorption, reflection and scattering, the ground effect, attenuation by vegetative cover, refraction by air temperature and wind gradients, and attenuation due to turbulence; often, there still remains a residual attenuation not accounted for by these mechanisms (Wahlberg and Larsen 2017). While geometrical spreading is frequency-independent, most of the effects contributing to EA are frequency-dependent and thus alter the spectrum of the emitted sound.

In most bioacoustic scenarios, spherical attenuation applies, and Eq. 5.2 can be reformulated to:

$$ RL= SL-20{\log}_{10}(r)- EA $$
(5.4)

The following sections investigate each of these components of EA.

5.2.3 Sound Absorption in Air

An important and predictable component of EA is attenuation by absorption in air. Absorption refers to the conversion of acoustic energy into heat, mostly due to molecular relaxation of air molecules and the air’s shear viscosity. Absorption loss EAabs is directly proportional to the distance r from the source:

$$ {EA}_{abs}=\alpha r $$
(5.5)

The absorption coefficient α (measured in dB/m) is a complex function of sound frequency, air temperature, relative humidity, and (to a lesser degree) atmospheric pressure (or altitude), in addition to characteristics of oxygen and nitrogen molecules (Attenborough 2007).

For instance, a 2-kHz signal propagating at standard atmospheric pressure (1 atm) and 20 °C is attenuated by about 0.9 dB/100 m, if the relative humidity (r.h.) is 60%, but by about 4.5 dB/100 m at 10% r.h. (Fig. 5.7). Generally, sound attenuation is greater in drier air than in damp, humid air. The effect is especially important at frequencies above 2 kHz. In other words, air acts as a low-pass filter enabling only low-frequency sound to travel over long distances from the source (Attenborough 2007; Wahlberg and Larsen 2017; Larsen and Radford 2018). Consequently, bats use high source levels to overcome the attenuation in air at high frequencies when they echolocate on targets at long distances. This low-pass filter effect is especially visible in the field for broadband sound signals produced by orthopterans and other insects (Römer 1998).

Fig. 5.7

Sound absorption coefficients α in air (dB/100 m) at 20 °C versus frequency at four different relative humidities (r.h. %). Based on ISO 9613-1:1993 (International Organization for Standardization. ISO 9613-1:1993, Acoustics—Attenuation of sound during propagation outdoors—Part 1: Calculation of the absorption of sound by the atmosphere. International Organization for Standardization; https://www.iso.org/standard/17426.html; accessed 9 January 2021)

Sound absorption in air varies with time of day and season, mainly due to variations in the relative humidity; absorption usually peaks in the afternoon, when the air is driest (see Larsson 2000; Attenborough 2007). So, if precise values of air absorption are needed in a field experiment, the relative humidity, atmospheric pressure, and air temperature must be measured over time and used in subsequent calculations (Wahlberg and Larsen 2017).

However, at the short distances (<100 m) where most acoustic communication between animals takes place, and at frequencies below 10 kHz, the role of absorption in overall propagation loss is likely insignificant compared to other environmental factors. Garcia et al. (2012), for example, described the 40-Hz wing beat signals of drumming ruffed grouse (Bonasa umbellus). Theoretically, these sound signals would be reduced by only 6 dB due to air absorption at a distance of 187 km from the drumming bird, whereas spherical spreading loss alone would have reduced the signal amplitudes to a level far below the auditory threshold of most animals already at a distance of 1 km (PLsph = 60 dB re 1 m).
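Equation 5.5 is straightforward to evaluate. The sketch below uses the absorption coefficients quoted above for a 2-kHz tone at 20 °C, plus the ruffed grouse example, to show how differently absorption treats high and low frequencies; the coefficients are taken from the text, not computed from ISO 9613-1.

```python
def absorption_loss(alpha_db_per_100m, r_m):
    """Absorption loss EA_abs = alpha * r (Eq. 5.5), in dB."""
    return alpha_db_per_100m * r_m / 100.0

# 2-kHz tone at 20 degC, 1 atm, over 1 km:
print(absorption_loss(0.9, 1_000))   # ~9 dB at 60% relative humidity
print(absorption_loss(4.5, 1_000))   # ~45 dB at 10% relative humidity

# 40-Hz grouse drumming: 6 dB of absorption only after ~187 km,
# i.e., alpha ~ 0.003 dB/100 m
print(absorption_loss(6.0 / 1_870, 187_000))   # ~6 dB
```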

5.2.4 Reflection, Scattering, and Diffraction

A second and less predictable component of EA is the attenuation caused by reflection, scattering, and diffraction. As a sound wave hits a hard surface, it is reflected. Reflection can be explained with Huygens’ principle. In Fig. 5.8a, the rooster from Fig. 5.4a is very far away such that the wavefronts at any location appear planar (rather than circular) and the wave rays are parallel (rather than radial). Three incident rays are drawn, hitting the surface (e.g., a road) at times t1, t2, and t3. By Huygens’ principle, each point on the road that is hit acts as the source of a secondary wave. Two secondary wavefronts are shown at time t3. From the time t1, when the first ray hits, to the time t3, the first wavefront has expanded quite a bit. The second wavefront was started at time t2, when the second ray hit, and has expanded less by time t3. The third ray is just starting its secondary wave at time t3, with its secondary wavefront not yet visible. The tangent to the secondary wavefronts at time t3 gives the new wavefront of the reflected wave. The angle of incidence (measured from the normal) is equal to the angle of reflection (also measured from the normal). This is referred to as the law of reflection. It applies to the so-called specular reflection (as from a mirror).

Fig. 5.8

(a) Sketch of specular reflection of a plane wave (originating from a far-away rooster) off a hard surface. Wave fronts are shown as green lines; they are perpendicular to the wave rays, shown as black arrows. The three incident rays hit at times t1 − t3 at the locations marked by small suns. Each of these points creates a secondary wave by Huygens’ principle. The secondary wavefronts superpose to yield the new wavefront of the reflected wave, shown at time t3, when the third ray just hits, the second ray has started to grow a secondary wavefront, and the first ray has grown the largest wavefront. The angles of incidence θi are equal to the angles of reflection θr. (b) Sketch of diffuse reflection off a rough surface where the unevenness is great compared to the wavelength of incident sound. While there is a reflected ray in the specular direction, too (indicated by a blue arrow), there are many other directions in which the incident sound is scattered (indicated by red arrows)

Reflection is not always specular but might instead be diffuse. In diffuse reflection, sound is scattered from the surface in all sorts of directions including the specular direction (Fig. 5.8b). This happens when the surface is not smooth but rough. Scattering depends on the ratio of the wavelength of sound to the size of the scatterer. When the sound wavelength is long (i.e., frequency is low) relative to the roughness of the surface, all the sound energy is reflected in the specular direction. When the wavelength is short (i.e., frequency is high) and less than the magnitude of the unevenness of the surface, then sound is scattered in other, non-specular directions. A gravel road, for instance, produces specular reflection at frequencies below 15–20 kHz, but at higher frequencies, where the gravel roughness is large relative to the wavelength, sound is scattered in different directions (Michelsen and Larsen 1983).
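A rough rule of thumb for whether a surface reflects specularly or diffusely compares its unevenness to the wavelength. The sketch below uses the classical Rayleigh roughness criterion (surface acts smooth if its unevenness is smaller than about one-eighth of a wavelength at normal incidence); both the criterion's factor and the 2-mm gravel roughness are illustrative assumptions, chosen to roughly match the 15–20 kHz transition quoted above.

```python
def wavelength(f_hz, c=343.0):
    """Wavelength (m) in air at speed of sound c."""
    return c / f_hz

def reflection_regime(f_hz, roughness_m, c=343.0):
    """Classify reflection off a rough surface (simplified sketch).

    Rayleigh-type criterion at normal incidence: the surface acts
    acoustically smooth (specular) if its unevenness is smaller than
    about one-eighth of the wavelength.
    """
    return "specular" if roughness_m < wavelength(f_hz, c) / 8.0 else "diffuse"

# Gravel with ~2-mm unevenness (illustrative value):
for f in (1_000, 10_000, 20_000, 40_000):
    lam_mm = wavelength(f) * 1_000
    print(f"{f:6d} Hz (lambda = {lam_mm:5.1f} mm): {reflection_regime(f, 0.002)}")
```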

Reverberation is a result of multiple reflections and refers to the phenomenon of sound persisting even after the source is turned off. In canyons, caves, or other enclosures, sound bounces off the boundaries again and again. The reverberant sound field is the region of space dominated by reflected sound (as opposed to the region near the source, where the direct sound dominates). Once the source is switched off, the reverberant field continues to exist for some time, decaying due to absorption by the medium, the boundaries (e.g., the walls of a music room), and absorbers in the room (e.g., furniture and people). The more reflective the boundaries, the greater the reverberation.

Reverberation severely alters the structure of the received sound and is one of the most unwanted effects in the analysis of recorded animal sounds (Fig. 5.9). This type of signal degradation with propagation distance can be quantified by measuring the blur-ratio (see, e.g., Dabelsteen et al. 1993). The received sound appears longer in duration than the emitted sound, with the delayed echoes forming a “tail.” This reverberation tail can be quantified as the tail-to-signal ratio (Holland et al. 2001). Consequently, leading edges of sound segments are relatively well-preserved, whereas ending edges are lost in reverberant environments.

Fig. 5.9

Spectrogram and envelope of a series of simple blackbird (Turdus merula) calls recorded at two different distances (amplitudes normalized and realigned in time). The top spectrogram, recorded at the longer distance from the source, shows more reverberation than the bottom one. The color scale from white to black is 96 dB in 6-dB bins

Diffraction occurs when a sound wave is partially obstructed. In Fig. 5.10a, a plane wave (perhaps again from a far-away rooster) hits a wall with an opening in the center. The rays that hit the wall are reflected (not drawn). The rays that hit the opening pass straight through. By Huygens’ principle, each point of the opening acts as a source of secondary waves. As the secondary wavefronts expand, they superpose to form new wavefronts that appear to bend behind the wall. This is termed diffraction. It also occurs when the obstruction is finite (Fig. 5.10b).

Fig. 5.10

(a) Sketch of diffraction as a sound wave passes through an aperture. Wave rays are indicated by black arrows; wavefronts are indicated by green lines. As the plane wave from a distant rooster hits a wall, each point in the opening acts as a source (indicated by suns) of secondary waves. The secondary waves combine to create the new wavefronts shown at three successive instances in time. The wavefronts appear to bend behind the aperture. (b) Sketch of diffraction as a sound wave passes by a finite obstruction

If the object that is in the path of a propagating sound wave is much smaller than a wall (e.g., a bush or maybe just an insect in the air), to the point where the wavelength is much greater (at least by a factor of 10) than the size of the object, then the sound wave “ignores” the object and propagates without obstruction. The sound effectively cannot “see” the object; it is too small. In laboratory experiments, bioacousticians should therefore make sure that objects in the sound path from loudspeaker to experimental animal are at least 10 times smaller than the wavelength of the stimulus sound (Larsen 1995). When the wavelength is of the same order of magnitude as the object, or somewhat greater, then diffractive scattering occurs (Bradbury and Vehrencamp 2011). As the name suggests, this is a combination of diffraction and scattering, whereby some sound bends around the object and some sound scatters in all directions, leading to a complicated sound field.
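For planning laboratory playbacks, the factor-of-10 rule above is easy to apply: the highest frequency in the stimulus sets the maximum size of objects that can safely be left in the sound path. A minimal sketch:

```python
def max_unobtrusive_size(f_hz, c=343.0):
    """Largest object size (m) that a sound of frequency f effectively
    'ignores', using the factor-of-10 rule of thumb (Larsen 1995)."""
    return (c / f_hz) / 10.0

# Objects between loudspeaker and animal should be smaller than:
for f in (500, 2_000, 10_000):
    print(f"{f:6d} Hz: < {max_unobtrusive_size(f) * 100:4.1f} cm")
```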

Different surfaces or materials exhibit different degrees of sound reflection, absorption, and transmission. A hard, compact, smooth surface (such as a paved road, ice sheet, cave wall, canyon, subterranean tunnel, burrow wall, or wall of a captive animal’s exhibit) reflects more and absorbs less acoustic energy than a porous, soft surface (such as tree leaves, grassy pastures, or forest canopy). Whether a surface or object is considered rough or smooth, and hard or soft, depends on the wavelength of the sound. In a mixed deciduous forest, reverberations at frequencies above 4 kHz are stronger with leaves on the trees than without leaves (Wiley and Richards 1982). Reverberations are essentially absent in an open field on a calm day.

5.2.5 Ground Effect

Another component of EA is the so-called ground effect, which is always present in terrestrial sound propagation. The sound signal from a sender (S) located at some height above ground (e.g., a bird at 4 m) will reach a receiver (R; e.g., a recordist’s microphone at 1.5 m) first by the direct path (PD) and a moment later by the indirect and longer path along which the signal is reflected from the ground (PG) (Fig. 5.11a). This results in a range-dependent interference pattern between the sound propagating along PD and PG. The interference pattern has regions of enhanced received level (due to constructive interference) and of attenuated received level (due to destructive interference) at the position of R (Fig. 5.11b). The received sound signal is a distorted version of the emitted signal. It is said to be comb-filtered, as the destructive interference creates the “comb teeth” attenuating some frequencies in the signal, whereas the constructive interference enhances other frequencies of the signal. The magnitude of the ground effect depends on sound frequency; on the geometry (sender-receiver separation distance and heights above ground); on the roughness and softness of the ground; and on atmospheric pressure, ambient temperature, relative humidity, and turbulence (see Attenborough et al. 2007). Acoustically hard ground surfaces (such as rock or consolidated sand) produce comb-filter effects over a wide frequency range extending to relatively high frequencies, whereas acoustically soft surfaces (such as grasslands, forest floors, or unpacked snow) mainly generate the ground effect at low frequencies. Recordists may reduce the ground effect by placing microphones as high as practically possible above soft ground. For a general introduction to the phenomenon, see Michelsen and Larsen (1983) or Wahlberg and Larsen (2017). For a comparison between ground effect models and outdoor recordings, see Jensen et al. (2008).

Fig. 5.11

Predicted ground effect. (a) Sender 4 m above ground, receiver 1.5 m above ground, horizontal separation distance 50 m (not to scale). The direct wave PD and the reflected wave PG superpose at R. (b) At frequencies where the two waves arrive in phase at R, superposition results in level enhancement of up to 6 dB; at frequencies where they arrive out of phase, levels are attenuated by up to 20–30 dB. Black curve: predicted decibel values that need to be added to the geometrical spreading loss. The ground was modeled as a grass-covered field (flow resistivity 100 kPa s m⁻², porosity 30%, layer depth 0.01 m). Red curve: As in the black curve, but more realistic air absorption (at 20 °C, 75% relative humidity, standard atmospheric pressure) and moderate turbulence (mean-squared refractive index of 10⁻⁵) were added. Effects of temperature and wind-induced refraction were excluded in the model, which was developed by Keith Attenborough and Shahram Taherzadeh and improved by Kenneth Kragh Jensen
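The comb-filter pattern in Fig. 5.11b arises from the interference of just two arrivals, so its qualitative shape can be reproduced with a very simple two-ray model. The sketch below uses the geometry of Fig. 5.11a (sender at 4 m, receiver at 1.5 m, 50 m apart) but assumes a perfectly reflecting, flat ground and ignores absorption, turbulence, and the finite ground impedance included in the full model; it captures only the pattern of peaks (up to +6 dB) and notches.

```python
import math

def two_ray_level(f_hz, hs=4.0, hr=1.5, d=50.0, c=343.0):
    """Level (dB re the direct arrival alone) of direct + ground-reflected
    waves, assuming a perfectly reflecting flat ground (coefficient +1)."""
    r_direct = math.hypot(d, hs - hr)          # direct path length
    r_ground = math.hypot(d, hs + hr)          # path via the image source
    k = 2.0 * math.pi * f_hz / c               # wavenumber
    dphi = k * (r_ground - r_direct)           # phase lag of the reflection
    # Complex sum of the two arrivals with 1/r amplitude spreading:
    re = 1.0 / r_direct + math.cos(dphi) / r_ground
    im = math.sin(dphi) / r_ground
    return 20.0 * math.log10(math.hypot(re, im) * r_direct)

for f in (100, 300, 700, 1_400):
    print(f"{f:5d} Hz: {two_ray_level(f):+5.1f} dB re direct")
```

With these numbers, the first deep notch falls near 700 Hz, where the path-length difference is half a wavelength.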

5.2.6 Attenuation by Vegetative Cover

Absorption of sound by vegetation is a component of EA that can further dissipate airborne sounds over distance, as acoustic energy is converted to heat in the plant material by viscous friction. The absorption of sound in vegetation depends on the material composition and hardness of the surfaces, including the soft ground often found in woodland. Leaves absorb more sound energy than a tree trunk, whereas a tree trunk reflects more sound than leaves do. All of these effects are frequency-dependent.

This component of EA obeys no simple rules and needs to be measured by propagation experiments in the field (e.g., Dabelsteen et al. 1993). Aylor (1972a, b) measured sound propagation loss through various crops, bushes, and trees by broadcasting from a loudspeaker and recording at some distance with a microphone. He found that foliage enhanced absorption and scattering. Price et al. (1988) modeled and measured attenuation by vegetation in different forest environments and documented scattering from tree trunks, enhanced ground effect in the presence of mature forest litter, and attenuation by foliage. Foliage attenuation had the greatest effect above 1 kHz and increased almost linearly with the logarithm of frequency. Through mixed coniferous forest, for instance, the attenuation over 24 m varied from about 5 dB at 2 kHz to 10 dB at 4 kHz, which is the range of dominant frequencies in many songbird songs. This foliage attenuation is less than, but needs to be added to, the 28-dB attenuation caused by spherical spreading over the same distance (Eq. 5.2).

Some research on sound propagation through vegetation was motivated by a desire to attenuate anthropogenic noise, such as road noise, but generally, and perhaps surprisingly, dense foliage accounts for only a small amount of attenuation. Martínez-Sala et al. (2006) concluded that a 15-m wide patch of regularly spaced trees could attenuate car noise by at least 6 dB, an effect similar to that of more traditional noise barriers. Defrance et al. (2002), for instance, found that a 100-m wide forest strip provided an effective acoustical barrier to noise, similar to the attenuation shown in Fig. 5.12, where octave-band sound was broadcast through dense foliage and recorded at different distances in the forest.

Fig. 5.12

Attenuation of octave bands of noise (63 Hz to 8000 Hz) after propagating three distances through dense foliage. Data from ISO 9613-2:1996 (International Organization for Standardization. ISO 9613-2:1996, Acoustics—Attenuation of sound during propagation outdoors—Part 2: General method of calculation. International Organization for Standardization; https://www.iso.org/standard/20649.html; accessed 9 January 2021)

At present, vegetation attenuation is not well understood. A much larger database is needed before it is possible to accurately predict the effect of different kinds of vegetation on sound propagation (see Attenborough et al. 2007).

5.2.7 Speed of Sound in Still Air

The speed of sound in still air is affected only by the ambient air temperature and, to a minimal extent, air pressure (or altitude). If the sound propagates under windy conditions, however, the effective speed of sound will be modified by the wind, such that the velocity of a tailwind adds to the speed of sound and the velocity of a headwind subtracts from it.

The speed of sound determines the arrival time of a signal traveling from the sender to the receiver. Gradients in the speed of sound bend a propagating sound wave away from regions of higher air temperature and towards regions of lower air temperature (or from regions of higher wind velocity towards regions of lower wind velocity). The speed of sound in air at 21 °C is 344 m/s. At the freezing point, 0 °C, the speed of sound in air is 331 m/s. A good approximation of the speed of sound c in dry air with 0.04% CO₂ and temperature Tc (in °C) is:

$$ c=\left(331.45+0.607\ {T}_c\right)\ \mathrm{m}/\mathrm{s} $$
(5.6)
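Equation 5.6 is simple to evaluate; it yields the values quoted above (331 m/s at 0 °C and 344 m/s at 21 °C) and shows how air temperature changes the arrival time of a call over a fixed distance. A minimal sketch:

```python
def speed_of_sound(t_celsius):
    """Approximate speed of sound (m/s) in dry air (Eq. 5.6)."""
    return 331.45 + 0.607 * t_celsius

# Travel time of a call over 100 m on a freezing vs. a warm day:
for t in (0.0, 21.0):
    c = speed_of_sound(t)
    print(f"{t:4.0f} degC: c = {c:5.1f} m/s, t(100 m) = {100.0 / c * 1000:5.1f} ms")
```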

5.2.8 Refraction by Air Temperature Gradients in Still Air

Refraction is the change of the direction of sound propagation due to changes in the speed of sound. In the example of Fig. 5.13a, a plane wave in medium 1 hits an interface with medium 2. Some of the acoustic energy might be reflected (as in Fig. 5.8a, not drawn in Fig. 5.13a), and some of the energy is transmitted. The transmitted wave is refracted, because the speeds of sound differ in the two media. If c1 > c2, then the transmitted wave bends towards the normal (i.e., away from the interface; Fig. 5.13a); if c1 < c2, then the transmitted wave bends away from the normal (i.e., towards the interface; Fig. 5.13b). The angles of incidence and refraction (transmission) are related via Snell’s law (named after Dutch astronomer and mathematician Willebrord Snell):

Fig. 5.13

(a) Sketch of refraction at a boundary between medium 1 (high sound speed) and medium 2 (low sound speed). Three rays (black arrows) are shown, hitting the interface at times t1-t3. Each gives rise to secondary waves (by Huygens’ principle) starting at the points marked with small suns. At time t3, the third ray just meets the interface, the second ray has produced a small secondary wave, and the first ray’s secondary wave has grown quite a bit. Drawing the tangent to the secondary waves at time t3 yields the new wavefront (green line) in the second medium. With rays, by definition, being perpendicular to the wavefronts, it can be seen that the rays bend towards the normal in the second medium (θt < θi). Successive wavefronts are drawn to show that they are spaced farther apart in the medium with higher sound speed, and so the wavelength λ is greater in the medium with higher sound speed. (b) Sketch of gradual refraction by a vertical gradient in sound speed. In the illustrated example, c1 < c2 < c3 < c4 < c5

$$ \frac{\sin {\theta}_i}{\sin {\theta}_t}=\frac{c_1}{c_2} $$
(5.7)

Note that, while the frequency of the sound does not change during transmission, the wavelength does change. With c = λf (see Chap. 4, section on the speed of sound), the wavelength is smaller in the medium with lower sound speed.
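Snell's law (Eq. 5.7) can be applied directly to compute the transmitted angle, including the total reflection that occurs beyond the critical angle when sound passes into a faster medium. The sound speeds used below are arbitrary illustrative values:

```python
import math

def transmitted_angle(theta_i_deg, c1, c2):
    """Transmitted (refracted) angle in degrees from Snell's law (Eq. 5.7).

    Angles are measured from the normal. Returns None beyond the
    critical angle (total reflection, possible when c2 > c1).
    """
    s = math.sin(math.radians(theta_i_deg)) * c2 / c1
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))

# Fast (c1 = 350 m/s) into slow (c2 = 330 m/s): bends towards the normal
print(transmitted_angle(40.0, 350.0, 330.0))   # ~37.3 deg (< 40)
# Slow into fast, beyond the critical angle (~70.5 deg here):
print(transmitted_angle(80.0, 330.0, 350.0))   # None: totally reflected
```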

Refraction of sound waves in air is a common phenomenon due to vertical gradients of air temperature and/or wind velocity. A gradual change in sound speed is illustrated in Fig. 5.13b, where the rays bend more and more upwards as the sound speed increases. In terrestrial environments, the sound source is typically located close to the ground. A sound speed profile in which the speed of sound increases with altitude is downward refracting, while a sound speed profile in which the speed of sound decreases with altitude is upward refracting. Bent propagation paths have the effect that sound appears to arrive from a non-intuitive (i.e., not straight-line) direction. This phenomenon is an acoustic mirage, in analogy to optical mirages, which produce displaced images of far-away objects and which are also caused by refraction (of light).

The EA from refraction may be positive or negative, and so RL may be smaller or greater than predicted without a refracting atmosphere. Air temperature varies throughout the day and creates varying temperature gradients. So, recording at the same location at a different time of day can produce different results. Therefore, taking periodic measurements of the ambient temperature at different heights above the ground can give the researcher an indication of whether sound propagation conditions are changing and how quickly.

In still air during daytime, the air close to the ground is both warmer and more humid, and a stable air temperature gradient can be established, because sunlight heats the ground, which warms up much faster than the overlying air. At higher elevations, the air temperature decreases by 0.01 °C/m (Fig. 5.14a). Sound waves consequently bend away from locations near the ground, where the temperature is higher, and upwards towards locations with lower temperatures (Fig. 5.14b). Horizontal rays are directed upwards, as are downward-directed rays after they bounce off the ground. Therefore, a certain limiting ray exists that defines a shadow zone around the sound source, in which the sound level decreases much faster than predicted from distance alone (Fig. 5.14b). While the shadow zone cannot be reached by a direct path, it may be ensonified by reflection off houses (or other reflectors) in the vicinity and by paths passing through turbulence; the shadow zone is thus not totally quiet.

Fig. 5.14

Sketch of the effects of upward refracting sound speed gradients on outdoor sound propagation. (a) Temperature profile: Air temperature and consequently sound speed increases towards the ground in still air. (b) Ray traces: Sounds from a source (filled circle, here 5 m above ground) are refracted upwards, creating a circular shadow zone close to the ground around the source. Dashed line indicates a sound ray bouncing off the ground. (c) Wind velocity profile: Similar upward refraction is created upwind. Arrows indicate wind direction towards the source (“headwind”) and their length wind speed. Reprinted by permission from Springer Nature. Acoustic Conditions Affecting Sound Communication in Air and Underwater, Larsen and Radford (2018), Fig. 5.5.4. In: H Slabbekoorn, RJ Dooling, AN Popper and RR Fay (eds). Effects of Anthropogenic Noise on Animals, Springer Handbook of Acoustic Research 66, Springer Science and Business Media, LLC, part of Springer Nature: New York, Heidelberg, Dordrecht, London. pp. 109–144. https://doi.org/10.1007/978-1-4939-8574-6_5. © Springer Nature, 2018. All rights reserved

For example, on a sunny day with little wind, the air temperature can be 30 °C at the ground (c = 351 m/s), but at 2–3 m above ground, the temperature may be only 25 °C (c = 347 m/s). This decrease continues up through the atmosphere at 1 °C/100 m, the so-called temperature lapse. With such an air temperature gradient, the sound rays from a source located a few meters above ground will bend upwards, because the part of the wave closest to the warmer ground travels the fastest. In a carefully conducted experiment, a combination of upward refraction, strong upwind propagation, and air absorption was measured to reduce the level of propagating sound at a distance of 640 m by up to 20 dB more than predicted from Eq. 5.2 (Attenborough 2007). Perhaps for this reason, birds do not commonly sing near the ground in open environments on sunny days. Rather, they sing in flight well above ground, or from a perch (Wiley 2009).

On calm nights, the opposite air temperature gradient can occur close to the ground (called temperature inversion), as the ground cools faster than the overlying air. Air temperature increases up to 50–100 m above ground before decreasing again with altitude. Therefore, sound rays bend downwards and hit the ground (Fig. 5.15). A temperature inversion favors long-distance sound propagation as it leads to higher received levels than predicted by spherical spreading. For this reason, nocturnal communication distances of low-frequency African savanna elephant (Loxodonta africana) calls doubled on the savanna to as much as 10 km (Garstang et al. 1995). In these conditions, sound energy is channeled within the surface layer, making spreading losses effectively cylindrical rather than spherical. Garstang (2010) suggested that a loud infrasonic elephant call during the middle of the day would travel no more than 1 km (i.e., be heard over an area of 3 km²), but an elephant call at night might be heard over an area of 300 km² (see also Garstang et al. 1995; Larom et al. 1997). Elephants might adjust the timing and abundance of their low-frequency calls and apply them specifically for long-distance communication according to atmospheric conditions.

Fig. 5.15

Sketch of the effects of downward refracting sound speed gradients on outdoor sound propagation. (a) Temperature profile: On calm nights, air temperature and consequently sound speed may increase with height above ground until temperature lapse starts. (b) Ray traces: Sounds from a source (filled circle, here 5–10 m above ground) are refracted downwards, creating higher sound levels with distance than predicted from spherical spreading. (c) Wind velocity profile: Similar downward refraction with increased sound levels may be created downwind. Arrows indicate wind direction away from the source (“tailwind”) and their length wind speed. Reprinted by permission from Springer Nature. Acoustic Conditions Affecting Sound Communication in Air and Underwater, Larsen and Radford (2018), Fig. 5.5.5. In: H Slabbekoorn, RJ Dooling, AN Popper and RR Fay (eds). Effects of Anthropogenic Noise on Animals, Springer Handbook of Acoustic Research 66, Springer Science and Business Media, LLC, part of Springer Nature: New York, Heidelberg, Dordrecht, London. pp. 109–144. https://doi.org/10.1007/978-1-4939-8574-6_5. © Springer Nature, 2018. All rights reserved

An air temperature gradient can arise in other locations than just close to ground. Geiger (1965) found the air in and above the forest canopy beginning to warm immediately after sunrise, whereas the air below the canopy was slower to respond. This creates a bilinear sound speed profile with an upward refracting gradient above the canopy and a downward refracting gradient below the canopy. So, for a short period after sunrise, vocalizing birds and, for instance, howler monkeys (Alouatta sp.) located below the canopy can increase the range of their vocalizations relative to later in the day (Wiley and Richards 1978; Wiley 2009).

5.2.9 Refraction by Gradients of Wind Velocity

Strong air temperature gradients cannot exist under strong wind conditions, so in open environments, the effects of wind velocity on sound propagation are more influential than those of air temperature gradients (Attenborough 2007). Wind may shift the apparent direction of a sound, such that it seems to come from a location different from where it was actually produced (acoustic mirage). Wind velocity gradients can enhance or impede sound propagation, leading to negative or positive EA. The effective speed of sound is the sum of the temperature-dependent speed of sound and the net wind velocity along the direction of propagation.

Attenborough et al. (2007) reported the general relationship between the sound speed profile c(z), the air temperature profile T(z), and the wind velocity profile u(z), where z is the height above ground, when the wind blows in the direction of sound propagation (when the wind blows against propagation, −u(z) is added):

$$ c(z)=c(0)\sqrt{\frac{T(z)+273.15}{273.15}}+u(z) $$
(5.8)

Wind velocity is lowest at the ground and increases with altitude (Figs. 5.14c and 5.15c). Sound traveling upwind refracts upwards, and sound traveling downwind refracts downwards (Figs. 5.14b and 5.15b). As with temperature gradients, this creates a shadow zone upwind (Fig. 5.14b), where the sound is not heard. Downwind, sounds propagate in a channeled way (Fig. 5.15b) with less loss. Sound thus attenuates more against the wind than with the wind. Despite this common phenomenon, Wiley (2009) commented that there are no documented cases of animals selectively communicating downwind. Refraction by gradients of wind velocity, however, played a significant role in battles of the American Civil War in the rolling hills of the eastern U.S. There was no radio communication in the nineteenth century, so commanders often depended on what they heard of the battle in front of them to make decisions about troop movements. During the Battle of Gettysburg, an acoustic shadow zone meant that commanders could not hear the sounds of battle just 10 miles away, whereas people 150 miles away in Pittsburgh clearly heard the fighting (Ross 2000).
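Equation 5.8 can be used to build effective sound speed profiles and see why downwind propagation refracts downwards. In the sketch below, the temperature lapse comes from the text (0.01 °C/m), while the logarithmic wind profile is a common idealization with arbitrary illustrative parameters:

```python
import math

def effective_sound_speed(z, t_of_z, u_of_z, downwind=True):
    """Effective sound speed c(z) in m/s (Eq. 5.8).

    t_of_z(z): air temperature (degC) at height z;
    u_of_z(z): wind speed (m/s) at height z;
    downwind: wind blowing in the propagation direction (+u), else -u.
    """
    c_temp = 331.45 * math.sqrt((t_of_z(z) + 273.15) / 273.15)
    return c_temp + (u_of_z(z) if downwind else -u_of_z(z))

t = lambda z: 20.0 - 0.01 * z                 # daytime temperature lapse
u = lambda z: 1.0 * math.log(1.0 + z / 0.05)  # illustrative log wind profile

for z in (1.0, 10.0, 50.0):
    down = effective_sound_speed(z, t, u, downwind=True)
    up = effective_sound_speed(z, t, u, downwind=False)
    print(f"z = {z:4.0f} m: downwind {down:6.1f} m/s, upwind {up:6.1f} m/s")
```

With these assumptions, the effective sound speed increases with height downwind (downward refraction) and decreases with height upwind (upward refraction and a shadow zone), as sketched in Figs. 5.14 and 5.15.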

Sound maps portray the attenuation of sound over distance from a source. The maps take a bird’s-eye view, showing attenuation in 360° about a sound source. Such maps can be produced at a specific receiver altitude, or commonly show maximum received levels over a range of altitudes with the intent of yielding “conservative” estimates of received level. The attenuation pattern radiating from the sound source is typically irregular in shape (rather than concentric) and helps identify environmental conditions that impede or promote sound propagation. Sound mapping tools can commonly utilize data on topography and ground absorption, air temperature, and wind direction and speed. The example in Fig. 5.16 shows how wind attenuated noise from a gunshot upwind but enhanced received levels downwind.

Fig. 5.16

Noise map showing the received levels 50 cm above ground of a gunshot fired towards east at a location (small red circle in dark blue area upper left corner) close to a lake (lake contour lines indicated by thin black curves) with varied topography. The color coding indicates iso-dB-curves in 5-dB steps. The dark arrow indicates wind direction and its length corresponds to 300 m on the ground. Note how the wind attenuates the gunshot upwind and enhances it downwind. Noise map calculated by DELTA—a part of FORCE Technology, Hørsholm, Denmark, using Nord2000 software (https://eng.mst.dk/air-noise-waste/noise/traffic-noise/nord2000-nordic-noise-prediction-method/; accessed 23 December 2020). Figure donated by Jesper Madsen, Aarhus University

5.2.10 Attenuation from Air Turbulence

Turbulence refers to unsteady and irregular motion of the air. It is very difficult to model and predict. It may be mechanically or thermally induced. Mechanical turbulence is caused by friction, for example, when air moves over rough ground or past obstacles such as houses and trees. Friction causes eddies and thus turbulence. This turbulence is stronger in higher wind speeds and rougher terrain. Turbulence is particularly great during fall (katabatic) winds, which shoot down the slope of a mountain. Thermal turbulence is created when the sun heats the ground unevenly. For example, bare ground warms up faster than fields with vegetative cover or bodies of water. Convective air currents are established, with warm and less dense air rising and cold and denser air sinking. These currents, in turn, may generate eddies. Eddies may extend from the ground to a few hundred meters height. They can be of various sizes (height and diameter), and larger eddies may break up into smaller ones. Because of air temperature gradients and wind, air is always in motion, and this motion may generate turbulence.

Turbulence causes EA, which increases with distance from the source, with the level of turbulence, and with sound frequency (see red curve in Fig. 5.11b). This EA is typically highest during daytime and on hot sunny days. A characteristic effect of turbulence on sound propagation is that received levels at a fixed location fluctuate rapidly with time and, at some range, this fluctuation stabilizes at a standard deviation of about 6 dB (Daigle et al. 1983). Van Staaden and Römer (1997), for instance, reported that at night, the sound pressure level of the song of an African bladder grasshopper (Bullacris intermedia) over open grassland decreased with distance very close to the expected 6 dB per distance doubling of spherical spreading. During daytime, however, the attenuation was much larger and more variable due to air turbulence.

For more in-depth reading on outdoor sound propagation, please see Attenborough (2007), Attenborough et al. (2007), Larsen and Wahlberg (2017), Wahlberg and Larsen (2017), or Larsen and Radford (2018).

5.3 The Source-Path-Receiver Model for Animal Acoustic Communication

The SPRM can be used to examine acoustic communication among animals. In the example of Fig. 5.17, two gentoo penguins (Pygoscelis papua) are communicating within their nesting colony in Antarctica. The sender (i.e., the source) emits a penguin display call. The call spreads through the habitat, experiencing various forms of attenuation. The receiver is another gentoo penguin. It might respond acoustically and thus become the next sender. Whether this two-way acoustic communication is successful depends on a number of parameters.

Fig. 5.17

Example of the SPRM for animal acoustic communication. The source is a gentoo penguin emitting its display call within its nesting colony in Antarctica. The sound propagation path takes the call through the local habitat. The receiver is another gentoo penguin in a neighboring colony who might respond acoustically, thereby becoming the next source. The parameters that affect successful communication are listed below the source and the receiver. Along the path, the call experiences various propagation effects leading to attenuation. Ambient noise in the habitat stems from waves, wind, and ice (abiotic), other penguins (biotic), and perhaps humans (anthropogenic). Ambient noise at the receiver reduces the signal-to-noise ratio and hence the detectability of the call. Ambient noise at the source may lead to increases in source level and repetition (redundancy) and shifts in spectral content (Lombard effect)

The locations of sender and receiver matter; the closer together they are, the better the communication—most likely. If the source emission pattern is directional rather than omnidirectional (i.e., the call can be emitted in a specific direction), then the orientation of the sender towards the receiver matters. Similarly, if the receiver’s hearing is directional, then the receiver’s orientation affects communication success. A higher source level will increase the likelihood of successful reception, unless the environment is highly reverberant, in which case the echoes would also be louder and potentially interfere with communication success. The frequency content of the call matters, because different frequencies propagate differently, and the hearing abilities of the receiver are frequency-dependent.

Along the path, some of the call energy is lost due to geometrical spreading and some is absorbed by the air, snow, and soil. The direction of propagation changes due to reflection and scattering off rocks, and due to refraction by sound speed gradients in air. Diffraction around mountains might play a role over longer ranges. Ambient noise in the environment does not affect sound propagation; i.e., it neither leads to attenuation nor changes the direction of propagation.

Ambient noise in the environment affects whether the call is received and correctly interpreted. Ambient noise can be of abiotic, biotic, or anthropogenic origin. Wind causes noise, as do waves and breaking ice. The other penguins in the colony create ambient noise with their own acoustic communications. Human presence (e.g., chatting tourists stomping through the snow towards the penguin colony) might add to the ambient noise. Ambient noise at the location of the receiver lowers the signal-to-noise ratio (SNR) at which the call is received. The critical ratios (specific to the receiver’s auditory system; see Chap. 10) dictate below which SNR the call is masked by the ambient noise and thus not detected. At intermediate SNRs, the call might be detected, but not correctly interpreted. Masking-release processes (also specific to the receiver’s auditory system) include comodulation masking release and spatial release from masking (e.g., Erbe et al. 2016) and aid signal detection and interpretation. Ambient noise at the sender may lead to the Lombard effect (Lombard 1911), whereby the sender raises the source level of its call, actively changes the spectral characteristics to move sound energy out of the frequency band most at risk of masking, and repeats the call to increase the likelihood of reception. Finally, ambient noise may instill anti-masking strategies in both sender and receiver, whereby they change their location and orientation (both towards each other) to foster communication success.
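To see how these pieces interact quantitatively, the sketch below combines spherical spreading (Eq. 5.2) with a simple detection criterion: the call counts as detectable when the received SNR exceeds the receiver's critical ratio. All numbers are illustrative assumptions (not measured gentoo penguin values), and real masking further depends on spectral overlap and the masking-release processes mentioned above:

```python
import math

def call_detectable(sl, r, noise_level, critical_ratio):
    """Crude detectability check for a communication call.

    RL = SL - 20 log10(r) (spherical spreading only, Eq. 5.2); the call
    counts as detectable when RL - noise_level >= critical_ratio.
    Strictly, critical ratios are defined re the noise spectrum level
    (see Chap. 10); this sketch treats noise_level as the masker level
    in the call's frequency band.
    """
    rl = sl - 20.0 * math.log10(r)
    snr = rl - noise_level
    return snr >= critical_ratio, rl, snr

# Illustrative numbers: SL = 90 dB re 20 uPa m, colony noise 45 dB,
# critical ratio 20 dB:
for r in (10, 100, 500):
    ok, rl, snr = call_detectable(90.0, r, noise_level=45.0, critical_ratio=20.0)
    print(f"r = {r:3d} m: RL = {rl:5.1f} dB, SNR = {snr:5.1f} dB, detectable: {ok}")
```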

5.3.1 The Sender

In animal acoustic communication, the signal that is being sent depends on the sender’s species, demographic parameters, behavioral state, and many other factors. Obviously, different taxonomic groups produce different sounds, ranging from infrasonic rumbles of elephants to ultrasonic clicks of bats (see Chap. 8 on classifying animal sounds). But even closely-related species may be told apart acoustically. For example, Gerhardt (1991) found that the number of pulses in the advertisement call in male Eastern gray treefrogs (Dryophytes versicolor) and Cope’s gray treefrogs (Dryophytes chrysoscelis) is the major cue distinguishing sympatric males who are similar in size and color. While species-specific calls of bats have been recognized for decades (Balcombe and Fenton 1988; Fenton and Bell 1981; O’Farrell et al. 1999), more recently, acoustic differences have been noted in bat species that are difficult to tell apart morphologically (Gannon et al. 2001; Gannon et al. 2003; Gannon and Racz 2006). The more we record and document species’ repertoires, the more successful bioacousticians will become at identifying the sender’s species.

Within the same species, populations living in different geographic regions and habitats may exhibit differences in their sounds, as demonstrated for Italian vs. English tawny owls (Strix aluco; Galeotti et al. 1996), pikas (Ochotona spp.; Trefry and Hik 2010), and chimpanzees (Pan troglodytes schweinfurthii; Mitani et al. 1992). Animals can tell apart conspecifics from a different region or population. Auditory neighbor-stranger discrimination has been demonstrated, for instance, in concave-eared torrent frogs (Odorrana tormota; Feng et al. 2009) and alder flycatchers (Empidonax alnorum; Lovell and Lein 2004), where territory holders respond less aggressively to played-back songs of neighbors than to those of strangers, a phenomenon termed the "dear enemy effect."

Not just population identity, but even individual identity may be encoded in the outgoing signal; for example, in oilbirds (Steatornis caripensis; Suthers 1994), banded mongooses (Mungos mungo; Fig. 5.18; Jansen et al. 2012), and fallow deer (Dama dama; Vannoni and McElligott 2007). Galeotti and Pavan (1991) studied an urban population of non-songbirds, tawny owls, in Pavia, Italy, and demonstrated that the males' territorial hoots have a clear species-specific structure with individual variations mainly in the final note of the call. Bats use individualized calls as they aggregate. For example, Melendez and Feng (2010) determined that communication calls of little brown bats (Myotis lucifugus) were individually distinct in minimum frequency, maximum frequency, and call duration. Individual pallid bats (Antrozous pallidus) emitted unique calls below the frequency of their echolocation clicks when in the presence of other bats (Arnold and Wilkinson 2011). Wilkinson and Boughman (1998) provided evidence that greater spear-nosed bats (Phyllostomus hastatus) used individual social calls to coordinate feeding on clumped nectar and fruit resources. Colonial animals, such as penguins, gulls, pinnipeds, and bats, especially rely on individual acoustic recognition between mother and offspring. These mothers often leave their young in a colony while they forage, so proper recognition of their own young upon return is important to fitness. Especially in birds without nests or physical landmarks, such as king penguins (Aptenodytes patagonicus), acoustic recognition between parents and chicks becomes critical (Aubin and Jouventin 2002; Searby et al. 2004).

Fig. 5.18
figure 18

Spectrograms of close calls of three banded mongooses (two females and one male; top to bottom) during (a) digging, (b) searching, and (c) moving between foraging sites. Black arrows point to the individually stable foundation of each call. Dashed arrows point to the harmonic extension, the duration of which was correlated with behavior (Jansen et al. 2012). © Jansen et al.; https://link.springer.com/article/10.1186/1741-7007-10-97. Published under a Creative Commons Attribution License; https://creativecommons.org/licenses/by/2.0/

As organisms grow, their physical dimensions and the size of their sound-producing organs increase. Generally, emitted sounds transition from high-frequency, low-amplitude sounds to low-frequency, high-amplitude sounds (Hardouin et al. 2014). This is partly a consequence of the simple physics that animals cannot efficiently emit sounds with wavelengths longer than the dimensions of their sound-emitting organs (e.g., see Michelsen 1992; Genevois and Bretagnolle 1994; Fletcher 2004; Larsen and Wahlberg 2017). For instance, Charlton et al. (2011) reported that increased body size in male koalas (Phascolarctos cinereus) was reflected in closer spacing of vocalization formants. (Formants are concentrations of acoustic energy around particular frequencies caused by resonances in the vocal tract.) Stoeger-Horwath et al. (2007) reported age-dependent variations in the grunt and trumpet calls of African savanna elephants: grunts were only recorded in individuals less than 2 months of age, and infants did not produce trumpet calls until they were 3 months old. The authors also reported age-dependent variations in the low-frequency rumble; older individuals rumbled at a lower fundamental frequency than younger individuals, and rumble duration tended to increase slightly with age. Weddell seal (Leptonychotes weddellii) pups on rookeries emit high-frequency calls that transition into low-frequency adult calls used exclusively while hauled out on the ice (Thomas and Kuechle 1982). Reby and McComb (2003) reported that lower-frequency roars of red deer (Cervus elaphus) stags were associated with greater age and weight, and so provided "honest" cues about reproductive condition.
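
The wavelength constraint can be expressed as a rough rule of thumb: an emitter of size L radiates efficiently only at wavelengths of about L or shorter, that is, at frequencies above roughly c/L. The short Python sketch below uses illustrative, assumed organ sizes, not measured values.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C

# Rule of thumb: an emitter of size L radiates efficiently only at
# wavelengths shorter than about L, i.e., at frequencies above about c/L.
for organ_size_m in (0.01, 0.1, 1.0):  # insect-, bird-, and large-mammal-scale (illustrative)
    lowest_efficient_hz = SPEED_OF_SOUND / organ_size_m
    print(f"{organ_size_m} m -> ~{lowest_efficient_hz:,.0f} Hz")
# 0.01 m -> ~34,300 Hz; 0.1 m -> ~3,430 Hz; 1.0 m -> ~343 Hz
```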

In many species, sex-specific differences in the acoustic repertoires are employed to ensure proper mate selection (Hardouin et al. 2014). The sender's reproductive state and drive for mating are often represented in its acoustic signals. In songbirds and many orthopteran insects, only males sing (Miller et al. 2007; Riede et al. 2010). Songs are under the influence of reproductive hormones associated with courtship, and songbird songs are long, complex, and repeated in a typical and recognizable sequence of sounds. In species in which males compete acoustically to attract a female mate, a substandard mating call could indicate immaturity, old age, or poor health of the caller. For example, Hardouin et al. (2007) examined hoots by 17 male scops owls (Otus scops) on the Isle of Oléron, France. Heavier male owls made lower-frequency hoots, which could give them a competitive mating advantage over lighter males.

Context further determines acoustic signaling. For example, predators often hunt quietly, and prey remain silent when they are aware of being stalked. A classic case, in which (prey) moths attempt to jam the echolocation signals of (predator) bats with a counter-signal to confuse the approaching predator, has developed another twist. Ter Hofstede and Ratcliffe (2016) found that "specific predator counter-adaptations include calling at frequencies outside the sensitivity range of most eared prey, changing the pattern and frequency of echolocation calls during prey pursuit, and quiet, or 'stealth,' echolocation." Acoustic interactions between a parent and offspring are often brief and relatively quiet to conceal and protect the young. In contrast, messages with a high reproductive value, such as mating calls or territorial defense calls, and calls with high survival value, such as infant distress calls or adult alarm calls, are produced loudly and repeatedly. To this point, distress calls of three species of pipistrelle bats (Pipistrellus nathusii, P. pipistrellus, and P. pygmaeus) were shown to be structurally convergent, "consisting of a series of downward-sweeping, frequency-modulated elements of short duration and high intensity with a relatively strong harmonic content" (Russ et al. 2004). The study suggested that having species-specific signals was less important than having some signal that elicited mobbing of the predator by bats regardless of species.

Ambient noise at the location of the sender may also trigger changes in signal emission level, repetition, and spectral content (collectively called the Lombard effect; Brumm and Zollinger 2011). For instance, male túngara frogs (Engystomops pustulosus) increased the level, repetition, and complexity of their calls when noise overlapped with their normal frequency band of calling, but not when the noise was higher and non-overlapping in frequency (Halfwerk et al. 2016). Brumm (2004) and Brumm and Todt (2003) noted that birds in a noisy environment called louder and more often, and repositioned themselves, possibly to increase the likelihood of the sound being received. Similarly, greater horseshoe bats (Rhinolophus ferrumequinum) increased their call level and shifted frequency in noisy environments (Hage et al. 2013). Eliades and Wang (2012) examined the neural processes underlying the Lombard effect in marmoset monkeys (Callithrix jacchus) and found that increased vocal intensity was accompanied by a change in auditory cortex activity toward the neural response patterns observed during vocalizations under normal feedback conditions.

Many animal communication calls are close to omnidirectional, radiating equally in all directions, at least at their lower frequencies (Larsen and Dabelsteen 1990). However, some bird species (e.g., juncos, warblers, and finches) have shown an ability to focus their calls in the direction of an owl to warn off the predator. Yorzinski and Patricelli (2009) examined the acoustic directionality of antipredator calls of 10 species of passerines and found that some birds would "call out of the side of their beaks" with their heads pointed away from conspecifics in an apparent attempt at ventriloquism. Whether terrestrial animals can actively change their sound emission directivity in response to noise (in order to enhance acoustic communication) remains to be investigated.

5.3.2 The Path and the Acoustic Environment

As the signal leaves the sender and travels through the environment, it is subjected to various forms of attenuation (as detailed above) and so the level at the receiver location is less than the source level. In addition, ambient noise at the receiver location reduces the SNR, making it harder for the receiver to detect the signal. Ambient noise may be classed according to its sources: abiotic, biotic, or anthropogenic. Chapter 7 provides a detailed overview of ambient noise with example spectrograms.

In terms of abiotic ambient noise, wind is a major contributor, and its noise level increases with wind speed. In addition, remember that the direction of the wind (i.e., upwind or downwind) affects the distance over which sounds propagate. Wind drives other types of noise, such as noise from vegetation moving in the wind. Even without wind, there may be noise from branches creaking and breaking in the heat or from rustling leaves in the understory as animals walk through. Wind also drives waves; surf noise or noise from breaking waves is typical of coastal areas. Even without wind, moving water, such as waterfalls, can be noisy. Precipitation (i.e., rain and hail) creates noise, as does the thunder that accompanies lightning. Geological events such as earthquakes, seismic rumblings, and volcanic eruptions contribute noise to the terrestrial soundscape. In polar regions, melting ice and calving glaciers contribute to ambient noise.

Biotic ambient noise comes from animals in the environment, which can be of the same or a different species from the target species. Several taxa call in large numbers at certain times of day and season, significantly raising ambient noise levels (e.g., chorusing cicadas, katydids, or frogs). Biologists typically think of soniferous animals as calling with specialized anatomies for sound production (e.g., syringes in birds and vocal cords in mammals). However, many animals can also produce mechanical sounds using external anatomy: wing-stridulation by a locust, abdomen vibration by a spider, beak-pecking by a woodpecker, teeth-chattering by a squirrel, foot-thumping by a rabbit, etc. In addition, animals produce unintentional sounds not meant for communication with conspecifics, such as leaves rustling as an animal walks through a forest, respiration noise, flight noise, and feeding sounds. Example spectrograms for many of these sounds are found in Chap. 7 on soundscapes as well as Chap. 8 on detecting and classifying animal sounds.

Anthropogenic ambient noise is due to aircraft, road traffic, trains, ships, military activities, construction activities, etc. Increasing encroachment of human activities on animal habitats results in increased noise exposure for all taxa of animals (see Chap. 13 on noise impacts).

Ambient noise varies with time on scales of hours, days, lunar phases, seasons, and years. The reason is a combination of sound propagation effects and source behavior. The time of day and season of the year affect sound propagation. As explained above, sounds can be heard from farther away during the night; for example, a train can be heard in the distance at night, but not during the day. Walking in the woods during the winter, a listener can hear sounds over much greater distances than during the summer with its thick vegetation. In many animals, sound-production rates are highest during the breeding season. Chorusing insects, amphibians, and birds precisely time the commencement of their cacophonies to a breeding season each year. Amphibians stop calling when they go into winter hibernation, so chorusing can stop abruptly in late autumn. Some birds migrate, and so their songs are missing from the winter soundscape. Many migrating birds are soniferous, and their flight calls can temporarily dominate the soundscape as they pass through an area during spring migration (e.g., a honking flock of migrating geese or a chirping flock of starlings). Yet other species of birds remain in temperate areas over winter and produce sounds all year long (e.g., cardinals, sparrows, and snow juncos). Tropical insects, frogs, and birds can reproduce multiple times per year; they do not migrate or hibernate and so are soniferous throughout the year. Diel cycles also exist, with birds calling mostly in the morning, insects in the afternoon, frogs in the evening, and nocturnal animals in the middle of the night.

5.3.3 The Receiver

The same factors that can affect the sender can also affect the receiver's ability to detect and interpret a signal (i.e., species, population, individual traits, age, sex, context, and ambient noise). On the species level, different species typically hear sound at different frequencies and levels; in other words, audiograms are species-specific (Fig. 5.19). Fortunately, data on the hearing abilities of invertebrates (including insects), reptiles, amphibians, fish, birds, and mammals continue to accumulate (see Volume 2). Note, however, that there is also intra-species and individual variability in hearing (see Chap. 10).

Fig. 5.19
figure 19

Hearing ranges of some animals and humans. Bars represent the approximate hearing frequency range, ordered by increasing upper frequency cut-off; blue: fish, gray: bird, green: frog, orange: terrestrial mammal, violet: human, and brown: marine mammal. The red vertical lines mark the frequencies of the musical notes C0–C16, for comparison. There is one octave between successive C-notes. Middle C on a piano is C4. A full-sized piano ranges from just under C1 to C8; tones above C11 are ultrasonic. Data from Fay (1988), Fay and Popper (1994), Heffner (1983), Heffner and Heffner (2007), Lipman and Grassi (1942), Warfield (1973), and West (1985), previously compiled by Vanderbilt University and Louisiana State University (http://lsu.edu/deafness/HearingRange.html; accessed 6 January 2021), and plotted by Wikimedia Commons author Cmglee. https://commons.wikimedia.org/wiki/File:Animal_hearing_frequency_range.svg. Figure licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license; https://creativecommons.org/licenses/by-sa/3.0/deed.en
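
For readers who wish to reproduce the C-note reference lines of Fig. 5.19, the frequencies follow directly from the octave relationship (a doubling per C-note). The Python sketch below assumes equal-tempered tuning with C0 at about 16.35 Hz (i.e., the A4 = 440 Hz standard) and flags notes above the nominal 20-kHz ultrasound boundary for human hearing.

```python
C0_HZ = 16.35  # equal-tempered C0, derived from the A4 = 440 Hz standard

for n in range(17):  # C0 through C16; one octave (a frequency doubling) per step
    f_hz = C0_HZ * 2 ** n
    tag = "  (ultrasonic, >20 kHz)" if f_hz > 20_000 else ""
    print(f"C{n}: {f_hz:,.0f} Hz{tag}")
# Middle C (C4) ~262 Hz; the top of a piano (C8) ~4,186 Hz; C11 ~33,485 Hz
```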

In American mink (Neovison vison), for instance, hearing sensitivity and frequency range change markedly with postnatal age. Pups up to 32 days old were almost deaf; three weeks later, their audiogram started to resemble that of an adult in shape, but the pups remained less sensitive than adults, especially below 10 kHz (Brandt et al. 2013). There might be good reasons why hearing in young animals is immature. For example, a male fruit fly (Drosophila melanogaster) cannot hear the female's flight tone until he is physically mature enough to mate (Eberl and Kernan 2011), which assures the female that any pursuing male is mature. Hearing capabilities further change over an adult's life. The natural deterioration of hearing due to anatomical and physiological aging is called presbycusis. Hearing loss can also be caused by acute exposure to intense noise or by chronic exposure to moderate noise (see Chap. 13). Hearing loss likely affects the ability of a receiver to hear and interpret a sender's message. For example, a hearing-impaired moth, which would typically evade a bat predator through an escape flight pattern, is easier to capture if it cannot hear the bat's echolocation signals.

The receiver's sex rarely influences its hearing capabilities; however, Narins and Capranica (1976, 1980) provided an example of sex differences in the auditory reception system of a Puerto Rican treefrog, the coquí (Eleutherodactylus coqui). Male and female treefrogs responded to different notes of the male's two-note co-qui call. Females were attracted to the qui-part of the call, whereas males paid most attention to the co-part, which was important in male–male aggressive interactions. The authors found that the inner-ear basilar papilla was tuned differently in males and females: males had fewer fibers tuned to the qui-part of the call, and females had fewer fibers tuned to the co-part. These differences also occurred in higher-order neurons in the brain, where response decisions take place. Later studies (Mason et al. 2003) showed similar sex differences in the middle ear of bullfrogs (Lithobates catesbeianus).

Ambient noise is a ubiquitous factor influencing signal reception and interpretation. After the various forms of attenuation along its path, a signal will be audible only if its received level remains above the power spectral density level of the ambient noise plus the critical ratio of the receiver. The critical ratio is essentially the minimum SNR needed for signal detection (see Chap. 10 for more information on the critical ratio). An even higher SNR is needed for signal discrimination, recognition, and finally, comfortable communication (Fig. 5.20; Lohr et al. 2003; Dooling et al. 2009; Dooling and Blumenrath 2013; Dooling and Leek 2018). Some birds take advantage of these limitations by producing both high-amplitude broadcast sounds and low-amplitude soft sounds. The former are public, covering a large active space with many potential receivers, whereas the latter are private, covering a very small active space with only a few receivers (Larsen 2020).

Fig. 5.20
figure 20

Sketch of the radii around a calling bird over which a broadcast public call might be detected, discriminated, and recognized. Detection (i.e., determining signal presence/absence) is possible over the longest ranges (i.e., at the lowest SNR). A higher SNR is needed for signal discrimination, then signal recognition, and finally, comfortable communication, yielding progressively shorter ranges. In louder ambient noise, the ranges shrink further. For animals with soft private calls or greater critical ratios, the radii are also smaller (Erbe et al. 2016). © Erbe et al.; https://doi.org/10.1016/j.marpolbul.2015.12.007. Licensed under CC BY 4.0; https://creativecommons.org/licenses/by/4.0/
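
The nested radii of Fig. 5.20 can be estimated numerically by finding, for each task-specific threshold, the largest range at which the received level still exceeds that threshold. The Python sketch below reuses the spherical-spreading-plus-absorption assumption from the earlier snippet; the source level, absorption coefficient, and thresholds are all hypothetical.

```python
import math

def max_range_m(source_level_db, threshold_db, absorption_db_per_m=0.01,
                r_max_m=10_000.0, step_m=1.0):
    """Largest range (m) at which the received level still exceeds a
    threshold, assuming spherical spreading plus linear absorption.
    The received level decreases monotonically with range, so a simple
    linear search suffices for this sketch."""
    r, best = 1.0, None
    while r <= r_max_m:
        rl = source_level_db - 20.0 * math.log10(r) - absorption_db_per_m * r
        if rl >= threshold_db:
            best = r
        else:
            break  # the level only decreases beyond this point
        r += step_m
    return best

# Hypothetical thresholds (noise PSD plus the SNR needed for each task):
for task, threshold_db in [("detection", 45.0), ("discrimination", 50.0),
                           ("recognition", 55.0)]:
    print(task, max_range_m(90.0, threshold_db), "m")
# Radii shrink as the required SNR grows: roughly 149, 90, and 52 m here.
```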

The auditory systems of some animals have built-in masking-release processes that reduce the impact of ambient noise. A spatial release from masking results from the directional hearing capabilities of the animal: if the signal arrives from a direction in which the receiver is more sensitive, and the noise arrives from a direction in which the receiver is less sensitive, then the reception directivity improves the SNR, and the signal can be detected in higher ambient noise. A spatial release from masking has been demonstrated in several taxa, including tropical crickets (Paroecanthus podagrosus and Diatrypa sp.; Schmidt and Römer 2011), gray treefrogs (Bee 2008), budgerigars (Melopsittacus undulatus; Dent et al. 1997), and pigmented guinea pigs (Cavia porcellus; Greene et al. 2018). A comodulation masking release is possible if the noise is broadband and amplitude-modulated coherently across its frequencies. The animal may then use information about the noise from frequencies outside the signal band to filter out the noise within the signal band. A comodulation masking release has been demonstrated in gray treefrogs (Bee and Vélez 2018), European starlings (Sturnus vulgaris; Klump and Langemann 1995), and house mice (Mus musculus; Klink et al. 2010). Additionally, animals have a host of behavioral adaptations to optimize sound reception. For example, an animal may improve the SNR of sound arriving at its ears by approaching the source, tilting its head, adjusting its pinnae (in the case of mammals), or moving away from a noise source (Nelson and Suthers 2004).
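
The arithmetic behind a spatial release from masking is simple decibel bookkeeping, sketched below with hypothetical directional gains: sensitivity toward the signal's direction adds to the signal, while insensitivity toward the noise's direction subtracts from the noise.

```python
def effective_snr_db(signal_level_db, noise_level_db,
                     gain_toward_signal_db=0.0, gain_toward_noise_db=0.0):
    """SNR after accounting for directional hearing (illustrative):
    the difference between the two directional gains is the
    spatial release from masking."""
    return ((signal_level_db + gain_toward_signal_db)
            - (noise_level_db + gain_toward_noise_db))

# Omnidirectional reception (hypothetical levels): SNR = 47 - 45 = 2 dB.
print(effective_snr_db(47.0, 45.0))
# Signal from a sensitive direction (+3 dB), noise from an insensitive one (-6 dB):
print(effective_snr_db(47.0, 45.0, 3.0, -6.0))  # SNR improves to 11 dB
```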

5.4 Summary

The source-path-receiver model (SPRM) is widely used in technical noise control and illustrates the importance of exploring a signal at all points between the source and receiver and of understanding the factors that affect the observations. This chapter developed the SPRM for the example of animal acoustic communication (also see Chap. 11). The influences of the sender's and receiver's species, age, sex, individual identity, and behavioral state were discussed. The receiving animal's hearing ability is a major factor for communication success.

Terminology related to sound propagation (or the path) was defined and basic concepts of outdoor sound propagation were developed, supported with simple equations. Several factors play an important role in sound propagation: distance between sender and receiver, air temperature, wind (direction and speed), obstacles along the path, and ground cover. The concepts of source level, received level, sound absorption, reflection, scattering, reverberation, diffraction, refraction, acoustic shadows, acoustic mirages, air temperature gradients, and wind speed gradients were illustrated. Two types of geometrical spreading (i.e., spherical and cylindrical) were applied. Examples of ray tracing were provided. Ambient noise (including its abiotic, biotic, and anthropogenic sources) in terrestrial environments and its influence on both sender and receiver was discussed.

The SPRM may be applied to many other bioacoustic scenarios or studies such as animal biosonar (where the sender and receiver are the same individual; see Chap. 12) or the effects of noise on animals (where the source might be a highway; see Chap. 13). It would also be useful to consider passive acoustic monitoring (of animals or soundscapes) within the framework of the SPRM to understand the sound sources recorded, the way the environment affects the recorded soundscape, and the effects (and potential artifacts) of the recording system (i.e., the receiver; see Chaps. 2 and 7). The SPRM might also guide the bioacoustician in setting up audiometric experiments (where the source is an engineered signal; see Chap. 10). The SPRM is a fundamental concept helpful in bioacoustic study design and interpretation.

5.5 Additional Resources

The following sites were last accessed 3 February 2021.