
AI & SOCIETY


Bird Song Diamond in Deep Space 8k

  • John Brumley
  • Charles Taylor
  • Reiji Suzuki
  • Takashi Ikegami
  • Victoria Vesna
  • Hiroo Iwata
Original Article

Abstract

The Bird Song Diamond (BSD) project is a series of multifaceted, multidisciplinary installations that aim to bring contemporary research on bird communication to a large public audience. Using art and technology to create immersive experiences, BSD allows large audiences to embody bird communication rather than passively observe it. In particular, BSD Mimic, a system for mimicking bird song, asks participants to grapple with both the audition and vocalization of birdsong. The use of interactive installations for public outreach provides unique experiences to a diverse audience while giving direct feedback to the artists and researchers invested in the success of such outreach. By following an iterative design process, both artists and researchers have been able to evaluate how effectively each installation promotes audience engagement with the subject matter. The execution and evaluation of each iteration of BSD are described throughout the paper. In addition, the process of interdisciplinary collaboration in our project has led to a more defined role for the artist as a facilitator of specialists. BSD Mimic has also raised further questions about the nature of audience collaboration in an engaged experience.

Keywords

Birdsong · Collaboration · Iterative design · Language · Interspecies communication · Site/habitat specificity

Notes

Acknowledgements

This research was supported by the National Science Foundation (Grant ID: 1125423). Additional support was provided by the Program for Empowerment Informatics at the University of Tsukuba and the University of California, Los Angeles. Martin Cody provided many of the recordings used throughout BSD. We also thank the many people who have been involved with the BSD project over its various incarnations: Naoaki Chiba, Jun Mitani, Yan Zhao, Joel Ong, Max Kazemzadeh, Itsuki Doi, Norihiro Maruyama, Hikaru Takatori, Aisen Chacin, Masa Jazbec, Takeshi Oozu, Mary Tsang, Carol Parkinson, Linda Weintraub, and many others.

Supplementary material

Supplementary material 1: 146_2018_862_MOESM1_ESM.docx (DOCX, 326 KB)


Copyright information

© Springer-Verlag London Ltd., part of Springer Nature 2018

Authors and Affiliations

  • John Brumley (1, corresponding author)
  • Charles Taylor (2)
  • Reiji Suzuki (3)
  • Takashi Ikegami (4)
  • Victoria Vesna (5)
  • Hiroo Iwata (6)

  1. Program in Empowerment Informatics, EMP Office, University of Tsukuba, Tsukuba, Japan
  2. Professor Emeritus, Department of Ecology and Evolutionary Biology, UCLA, Los Angeles, USA
  3. Graduate School of Informatics, Nagoya University, Nagoya, Japan
  4. The Graduate School of Arts and Sciences, University of Tokyo, Tokyo, Japan
  5. Department of Design Media Arts, UCLA, Los Angeles, USA
  6. Faculty of Engineering, Information and Systems, University of Tsukuba, Tsukuba, Japan
