Computational Bioacoustic Scene Analysis


Abstract

The analysis of natural and animal sound makes a demonstrable contribution to important challenges in conservation, animal behaviour, and evolution, and bioacoustics has now entered its big data era. Automation is therefore important, as is scalability, in many cases to very large volumes of audio and to real-time processing. This chapter focuses on the data science and computational methods that can enable this. Computational bioacoustics has some commonalities with wider audio scene analysis, as well as with speech processing and other disciplines; however, the tasks required and the specific characteristics of bioacoustic data call for new and adapted techniques. This chapter surveys the tasks and methods of computational bioacoustics, placing particular emphasis on existing work and future prospects for scalable analysis. We focus mostly on airborne sound; there has also been much work on freshwater and marine bioacoustics, and a small amount on solid-borne sound.

Keywords

Animal communication · Vocalisation · Ecoacoustics · Bioacoustics · Bird · Sound similarity · Species identification · Automatic species recognition · Natural sound · Soundscape · Acoustic monitoring · Passive acoustic monitoring · Animal calls · Vocal sequences


Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

Machine Listening Lab, Centre for Digital Music, Queen Mary University of London, London, UK
