
Bird Song Diamond in Deep Space 8k

Abstract

The Bird Song Diamond (BSD) project is a series of multifaceted, multidisciplinary installations that aim to bring contemporary research on bird communication to a large public audience. Using art and technology to create immersive experiences, BSD allows large audiences to embody bird communication rather than passively observe it. In particular, BSD Mimic, a system for mimicking bird song, asks participants to grapple with both the audition and the vocalization of birdsong. The use of interactive installations for public outreach provides unique experiences to a diverse audience, while giving artists and researchers direct feedback on the success of that outreach. By following an iterative design process, both artists and researchers have been able to evaluate how effectively each installation promotes audience engagement with the subject matter. The execution and evaluation of each iteration of BSD are described throughout the paper. In addition, the process of interdisciplinary collaboration in our project has led to a more defined role for the artist as a facilitator of specialists. BSD Mimic has also raised further questions about the nature of audience collaboration in an engaged experience.


Notes

  1.

    Parametric speakers use ultrasonic frequencies to minimize the natural spread of waves over distance. Audible sound is used to modulate an ultrasonic wave which is emitted from the parametric speaker, while a second, unmodulated, ultrasonic carrier wave is also sent from the same speaker. When the two waveforms collide with an object, they demodulate to produce audible sound that is the difference between the modulating and carrier waves (Woodford 2018).
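The demodulation described above can be sketched numerically: multiplying two ultrasonic waves (the nonlinear mixing that occurs in air or at a surface) yields components at their sum and difference frequencies, and the difference falls in the audible range. A minimal sketch, with illustrative carrier and audio frequencies:

```python
import numpy as np

fs = 192_000                # sample rate high enough to represent ultrasound
t = np.arange(fs) / fs      # one second of samples
f_carrier = 40_000.0        # illustrative ultrasonic carrier (Hz)
f_audio = 1_000.0           # audible tone to be recovered (Hz)

carrier = np.sin(2 * np.pi * f_carrier * t)
modulated = np.sin(2 * np.pi * (f_carrier + f_audio) * t)

# Nonlinear mixing multiplies the two waves; the product contains
# components at |f1 - f2| = 1 kHz (audible) and f1 + f2 = 81 kHz.
mixed = carrier * modulated

spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(mixed.size, 1 / fs)

# Strongest component within the audible band (< 20 kHz):
peak = freqs[np.argmax(spectrum[freqs < 20_000])]
print(round(peak))  # 1000
```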

  2.

    The fast Fourier transform (FFT) is a signal processing technique in which a complex waveform, in this case sound, is decomposed into its sinusoidal components. Using this technique, one can estimate the most prominent frequency of a sound, usually its pitch, by finding the frequency of the component wave with the highest amplitude.
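This peak-picking approach can be sketched in a few lines; the 3 kHz "whistle" and the noise level here are illustrative, not taken from the installation:

```python
import numpy as np

fs = 44_100                      # sample rate (Hz)
t = np.arange(fs // 2) / fs      # half a second of audio

# Synthetic "birdsong": a 3 kHz whistle plus quieter background noise.
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 3000 * t) + 0.2 * rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, 1 / fs)

# The bin with the highest amplitude gives the dominant frequency.
dominant = freqs[np.argmax(spectrum)]
print(round(dominant))  # 3000
```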

  3.

    Here, 8k resolution refers to 8192 × 4320 pixels. The system also uses a refresh rate of 120 Hz (Sick-Leitner 2015).

  4.

    See the Deep Space 8k Mimic Scatterplot in Supplementary Materials for the overall plot of participant mimic attempts against the existing birdsong.

  5.

    OpenGL is an application programming interface (API) used to render computer imagery (OpenGL 2018).

  6.

    User Datagram Protocol (UDP) is a connectionless communication protocol that is part of the Internet protocol suite (along with TCP and IP). OSC, mentioned above, is built on top of UDP (Postel 1980).
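A minimal sketch of a UDP exchange on localhost; the payload is an illustrative OSC-style address pattern written as a plain string, not a fully encoded OSC packet:

```python
import socket

# UDP is connectionless: the sender simply fires a datagram at an
# address/port with no handshake and no delivery guarantee.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))           # let the OS pick a free port
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"/bsd/mimic 0.87", ("127.0.0.1", port))

payload, _addr = receiver.recvfrom(1024)  # blocks until the datagram arrives
print(payload.decode())                   # /bsd/mimic 0.87

sender.close()
receiver.close()
```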

  7.

    General-purpose computing on graphics processing units (GPGPU) uses graphics hardware, here multiple graphics cards, for parallel processing of computationally intensive calculations (Mung and Mann 2004).

  8.

    HEVC, the successor to the H.264 video compression standard, was necessary to reduce file sizes for 8k-resolution video (Sullivan et al. 2012).
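FFmpeg (cited in the references) can produce such an HEVC stream. A typical invocation might look like the following; the file names and quality settings are illustrative, not those used in the installation:

```shell
# Encode a hypothetical 8k master file to HEVC (H.265) with libx265.
# -crf controls quality (lower = better quality, larger files);
# -preset trades encoding speed for compression efficiency.
ffmpeg -i master_8k.mov -c:v libx265 -preset slow -crf 22 -tag:v hvc1 output_8k.mp4
```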

  9.

    See Supplementary Materials for more detailed explanation of the pan-tilt system.

  10.

    Compatibility with Kinect v2’s HD face detection algorithm was also implemented to enable speakers to follow the tracked face of a participant.

  11.

    The raw data from the performance can be accessed here: https://bitbucket.org/johnbrumley/bsd-datasets.

References

  1. Arriaga JG, Sanchez H, Hedley R, Vallejo EE, Taylor CE (2014) Using song to identify Cassin’s vireo individuals. A comparative study of pattern recognition algorithms. In: Martínez-Trinidad JF, Carrasco-Ochoa JA, Olvera-Lopez JA, Salas-Rodríguez J, Suen CY (eds) Pattern recognition. Springer International Publishing, New York, pp 291–300

  2. Berkhout AJ (1988) A holographic approach to acoustic control. J Audio Eng Soc 36:977–995

  3. Boursier-Mougenot C (1999) From Here to Ear [Zebra Finches, Electric Guitars]. Traveling Installation

  4. Brancusi C (1928) Bird in Space. [Bronze, 54 × 8 1/2 × 6 1/2″ (137.2 × 21.6 × 16.5 cm)]. Museum of Modern Art, New York

  5. Bugler C (2012) The bird in art. Merrell, London

  6. Calder A (1971) Eagle [Steel, Painted]. Seattle Art Museum

  7. Carlbom I (1994) Modeling and visualization of empirical data. In: Rogers DF, Earnshaw RA (eds) State of the art in computer graphics: aspects of visualization. Springer New York, New York, pp 19–65. https://doi.org/10.1007/978-1-4612-4306-9_3

  8. Chacin AC, Jazbec M, Oka M, Doi I (2016) Bird Song Diamond: call and response and phase transition work. In: The twenty-first international symposium on artificial life and robotics 2016 (AROB 21st 2016), Beppu, Japan

  9. Chertow MR (2008) The IPAT equation and its variants: changing views of technology and environmental impact. In: Mitchell RB (ed) SAGE library of international relations: international environmental politics, vol 4. SAGE Publications Ltd., London, pp 87–87. https://doi.org/10.4135/9781446262108.n5

  10. Chiba N, Sumitani S, Matsubayashi R, Suzuki R, Arita T, Nakadai K, Okuno HG (2017) An improvement of HARKBird: a wild bird song observation and analysis tool based on an open-source robot audition software HARK. In: Proceedings of the 35th annual conference of the Robotics Society of Japan, RSJ2017ACA3-03

  11. Crist E (2013) On the poverty of our nomenclature. Environ Hum 3:129–147. https://doi.org/10.1215/22011919-3611266

  12. Cruz-Neira C, Sandin DJ, DeFanti TA (1993) Surround-screen projection-based virtual reality: the design and implementation of the CAVE. In: Proceedings of the 20th annual conference on computer graphics and interactive techniques (SIGGRAPH ‘93). ACM, New York, NY, USA, pp 135–142. https://doi.org/10.1145/166117.166134

  13. Cycling’74 (2018) Max software tools for media. https://cycling74.com/products/max/. Accessed 10 Feb 2018

  14. Dooling RJ (1982) Auditory perception in birds. In: Kroodsma DE, Miller EH (eds) Acoustic communication in birds, vol 1. Academic, New York, pp 95–130

  15. Ehnes J (2010) An audio visual projection system for virtual room inhabitants. In: 20th international conference on artificial reality and telexistence Proceedings, p 118

  16. Empowerment Informatics (エンパワーメント情報学) (2014) http://www.emp.tsukuba.ac.jp/english/environment/research.php. Accessed 10 Feb 2018

  17. FFmpeg (2017) FFmpeg. https://www.ffmpeg.org/. Accessed 10 Feb 2018

  18. Gibson JJ (1986) The ecological approach to visual perception. Lawrence Erlbaum Associates, Hillsdale

  19. Graham M (2011) Through birds’ eyes: insights into avian sensory ecology. J Ornithol. https://doi.org/10.1007/s10336-011-0771-5

  20. Haraway D (2016) Tentacular thinking: Anthropocene, Capitalocene, Chthulucene. E-Flux J. http://www.e-flux.com/journal/75/67125/tentacular-thinking-anthropocene-capitalocene-chthulucene/. Accessed 9 Feb 2018

  21. Head M (1997) Birdsong and the origins of music. J R Mus Assoc 122(1):1–23. https://doi.org/10.1093/jrma/122.1.1

  22. Hedley R (2016) Composition and sequential organization of song repertoires in Cassin’s vireo (Vireo cassinii). J Ornithol 157:13–22. https://doi.org/10.1007/s10336-015-1238-x

  23. Hein HS (1990) The exploratorium: the museum as laboratory. Smithsonian Institution Press, Washington

  24. Ikegami T, Oka M, Maruyama N, Matsumoto A, Watanabe Y (2012) Sensing the sound web. In: Art gallery at the 5th ACM SIGGRAPH conference and exhibition on computer graphics and interactive techniques in Asia, exhibited

  25. Ikegami T, Mototake Y-I, Kobori S, Oka M, Hashimoto Y (2017) Life as an emergent phenomenon: studies from a large-scale boid simulation and web data. Philos Trans Ser A Math Phys Eng Sci. https://doi.org/10.1098/rsta.2016.0351

  26. Kac E, Bennett E, Connell B, Peragine J, Bynaker C, Lindsay M (1996) Rara Avis. http://www.ekac.org/raraavis.html. Accessed 10 Feb 2018

  27. Kelley M (1978) Birdhouses [Wood, Paint]. Mike Kelley Foundation

  28. Kojima R, Sugiyama O, Hoshiba K, Nakadai K, Suzuki R, Taylor CE (2017) Bird Song scene analysis using a spatial-cue-based probabilistic model (special issue on robot audition technologies). J Robot Mechatron 29:236–246

  29. Kraft D (2013) Birdsong in the music of Olivier Messiaen. Arosa Press, London

  30. Krause B (1987) Bioacoustics, habitat ambience in ecological balance. Whole Earth Rev 57:14–18

  31. Kuka D et al (2009) DEEP SPACE: high resolution VR platform for multi-user interactive narratives. In: Iurgel IA, Zagalo N, Petta P (eds) Interactive storytelling. ICIDS 2009, lecture notes in computer science, vol 5915. Springer, Berlin

  32. Legrady G, Pinter M, Bazo D (2013) Swarm Vision [3 custom designed rails each with Sony PTZ camera, custom software animation, Apple MacPro, 2 projectors (Panasonic PT-DZ6710U or equivalent) or 2 HD large screens, dimensions variable]. https://www.mat.ucsb.edu/g.legrady/glWeb/Projects/sv/swarmvision.html. Accessed 12 Feb 2018

  33. Lynxmotion (2017) SSC-32U USB Servo Controller Board user guide. http://www.lynxmotion.com/images/data/lynxmotion_ssc-32u_usb_user_guide.pdf. Accessed 10 Feb 2018

  34. Lyons M, Brandis K, Callaghan C, McCann J, Mills C, Ryall S, Kingsford R (2017) Bird interactions with drones, from individuals to large colonies. bioRxiv. https://doi.org/10.1101/109926

  35. Malm A (2018) The progress of this storm: on society and nature in a warming world. Verso, London

  36. Maruyama N, Oka M, Ikegami T (2013) Creating space-time affordances via an autonomous sensor network—semantic scholar. In: 2013 IEEE symposium on artificial life (ALife), pp 67–73

  37. Maruyama N, Doi I, Masumori A, Oka M, Ikegami T, Vesna V, Taylor C (2014) Evolution of artificial soundscape in a natural environment. In: Exploiting synergies between biology and artificial life technologies: tools, possibilities, and examples at ALIFE, p 14

  38. Massumi B (2002) Parables for the virtual: movement, affect, sensation. Duke University Press, Durham

  39. Milk C, Tricklebank B, George J, Meyers A, Chasalow B (2012) The Treachery of Sanctuary [Projection]. Traveling Installation

  40. Mototake Y, Ikegami T (2015) A simulation study of large scale swarms. SWARM 2015, Kyoto University, Kyoto, pp 446–450

  41. Mung J, Mann S (2004) Using multiple graphics cards as a general purpose parallel computer: applications to computer vision. In: Proceedings of the 17th international conference on pattern recognition (ICPR2004). Cambridge, United Kingdom, vol 1, pp 805–808

  42. Nagel T (1974) What is it like to be a bat? Philos Rev 83(4):435–450. https://doi.org/10.2307/2183914

  44. OpenGL (2018) OpenGL—the industry standard for high performance graphics. https://www.opengl.org/. Accessed 10 Feb 2018

  45. Pijanowski B, Villanueva-Rivera L, Dumyahn S, Farina A, Krause B, Napoletano B, Pieretti N (2011) Soundscape ecology: the science of sound in the landscape. BioScience 61(3):203–216. https://doi.org/10.1525/bio.2011.61.3.6

  46. Pompei FJ (1999) The use of airborne ultrasonics for generating audible sound beams. J Audio Eng Soc 47(9):726–731

  47. Postel J (1980) RFC 768: user datagram protocol. https://tools.ietf.org/html/rfc768. Accessed 10 Feb 2018

  48. Reas C, Fry B (2006) Processing: programming for the media arts. AI & Soc 20:526. https://doi.org/10.1007/s00146-006-0050-9

  49. Renderheads (2018) AVPro Video. http://renderheads.com/product/avpro-video/. Accessed 10 Feb 2018

  50. Reynolds CW (1987) Flocks, herds, and schools: a distributed behavioral model. In: ACM SIGGRAPH computer graphics, pp 21–25

  51. Sasahara K, Cody ML, Cohen D, Taylor CE (2012) Structural design principles of complex bird songs: a network-based approach. PLoS One 7(9):e44436. https://doi.org/10.1371/journal.pone.0044436

  52. Schmeder A, Freed A, Wessel D (2010) Best practices for open sound control. In: Linux audio conference, Utrecht, NL

  53. Shi C, Gan WS (2010) Development of parametric loudspeaker. IEEE Potentials 29(6):20–24. https://doi.org/10.1109/MPOT.2010.938148

  54. Sick-Leitner M (2015) Deep Space 8k—the next generation—Ars Electronica feature. https://www.aec.at/feature/en/deep-space-8k/. Accessed 10 Feb 2018

  55. Simon T (2014) Birds of the West Indies. Gagosian Gallery, Los Angeles

  56. Sosolimited, Plebian Design, Hypersonic (2016) Diffusion Choir. Biomed Realty, Cambridge

  57. Stengers I, Goffey A (2015) In catastrophic times: resisting the coming barbarism. Open Humanities Press, London

  58. Sullivan GJ, Ohm JR, Han WJ, Wiegand T (2012) Overview of the high efficiency video coding (HEVC) standard. IEEE Trans Circuits Syst Video Technol 22(12):1649–1668. https://doi.org/10.1109/TCSVT.2012.2221191

  59. Sumitani S, Suzuki R, Arita T, Naren, Matsubayashi S, Nakadai K, Okuno HG (2017) Field observations and virtual experiences of bird songs in the soundscape using an open-source software for robot audition HARK. In: abstract book of 4th international symposium on acoustic communication by animals, pp 116–117

  60. Suzuki R, Matsubayash S, Hedley R, Nakada K, Okuno HG (2017) HARKBird: exploring acoustic interactions in bird communities using a microphone array. J Robot Mechatron 29:213–223

  61. Takatori H, Enzaki Y, Yano H, Iwata H (2016) Development of the large scale immersive display LargeSpace. Nihon Virtual Reality Gakai Ronbunshi 21(3):493–502

  62. Taylor C, Brumley JT, Hedley R, Cody ML (2017) Sensitivity of California thrashers (Toxostoma redivivum) to song syntax. Bioacoustics 26:259–270. https://doi.org/10.1080/09524622.2016.1274917

  63. Thrift N (2007) Non-representational theory: space, politics, affect. Routledge, London

  64. Tuchman M (1971) Art and technology: a report on the Art and Technology Program of the Los Angeles County Museum of Art, 1967–1971. Los Angeles County Museum of Art; distributed by the Viking Press, New York

  65. Unity3D (2018) Unity Game Engine [online] http://unity3d.com/. Accessed 1 Feb 2018

  66. Wainwright J, Mann G (2018) Climate Leviathan. Verso Books, London

  67. Wark MK (2016) Molecular Red: theory for the Anthropocene. Verso Books, London

  68. Wilson S, Cottle D, Collins N (2011) The SuperCollider Book. The MIT Press, Cambridge

  69. Woodford C (2018) Directional loudspeakers—how they work. http://www.explainthatstuff.com/directional-loudspeakers.html. Accessed 19 Feb 2018

  70. Yoneyama M, Fujimoto J-i, Kawamo Y, Sasabe S (1983) The audio spotlight: an application of nonlinear interaction of sound waves to a new type of loudspeaker design. J Acoust Soc Am 73(5):1532–1536

  71. Yu K, Yin M, Luo J-A, Wang Y, Bao M, Hu Y-H, Wang Z (2016) Wireless sensor array network DoA estimation from compressed array data via joint sparse representation. Sensors (Basel, Switzerland) 16(5):686. https://doi.org/10.3390/s16050686

Acknowledgements

Research was supported by the National Science Foundation (Grant ID: 1125423). Additional support was provided by the Program for Empowerment Informatics at the University of Tsukuba and the University of California, Los Angeles. Martin Cody provided many of the recordings used throughout BSD. We thank the many people who have been involved with the BSD project over its various incarnations: Naoaki Chiba, Jun Mitani, Yan Zhao, Joel Ong, Max Kazemzadeh, Itsuki Doi, Norihiro Maruyama, Hikaru Takatori, Aisen Chacin, Masa Jazbec, Takeshi Oozu, Mary Tsang, Carol Parkinson, Linda Weintraub, and many others.

Author information

Correspondence to John Brumley.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (DOCX 325 KB)

About this article

Cite this article

Brumley, J., Taylor, C., Suzuki, R. et al. Bird Song Diamond in Deep Space 8k. AI & Soc 35, 87–101 (2020). https://doi.org/10.1007/s00146-018-0862-4

Keywords

  • Birdsong
  • Collaboration
  • Iterative design
  • Language
  • Interspecies communication
  • Site/habitat specificity