
Hearing Gravitational Waves

On February 11, 2016, the New York Times published an online video with the attention-grabbing title “Ligo Hears Gravitational Waves Einstein Predicted.” 1 The video was embedded in a news article announcing that a group of scientists behind LIGO, the Laser Interferometer Gravitational-Wave Observatory,

had heard and recorded the sound of two black holes colliding a billion light-years away, a fleeting chirp that fulfilled the last prediction of Einstein’s general theory of relativity. That faint rising tone, physicists say, is the first direct evidence of gravitational waves, the ripples in the fabric of space-time that Einstein predicted a century ago. 2

The original press release about the gravitational waves, by the National Science Foundation, explained that during the collision, “a portion of the combined black holes’ mass” had been converted to energy “according to Einstein’s formula E = mc2” and had been emitted as a “strong burst of gravitational waves.” LIGO had observed this by sonifying the measurements of the arrival time of laser light split into two beams, each reflected by one of two mirrors at the end of the arms of LIGO’s L-shaped interferometer. A small time lag between the arrival times of the light beams, sonified in terms of frequency, expressed “the tiny disturbances the waves make to space and time as they pass through the earth.” 3 Or, in the sonically rich words of David Reitze, LIGO Lab Executive Director at Caltech, during the press conference:

Now, what LIGO does is that it actually takes these vibrations in space-time, these ripples in space-time, and it records them on a photo-detector, and you can actually hear them. … It is the first time the universe has spoken to us through gravitational waves. And this is remarkable. Up to now we have been deaf to gravitational waves, but today we are able to hear them. That is just amazing to me. 4

It was thus the sonification of visualized measurements that gave the New York Times as well as LIGO itself reason to talk about hearing evidence of a phenomenon journalists and scientists alike considered fundamental to our understanding of nature. A rising tone signaled that Einstein had been right. 5

As this chapter will show, the references to sound in the publicity on gravitational waves offer a fairly typical, if spectacular, example of how listening continues to “pop up” as a strategy for acquiring knowledge despite the shaky epistemological authority of sonic skills in the sciences. Given this contested status, what makes listening, notably for the purposes of monitory and exploratory listening in our taxonomy, a felt necessity or appealing feature of the sciences? In this chapter, I answer that question by relating the recurrent return of sound and listening in the sciences to three issues: the rise of digital sound technologies and the portability and versatility of these tools; the need for somatic vigilance in industrial settings, operating theaters, and laboratories; and the construction by both scientists and artists of a public fascination with the auditory sublime.

The Digital: Portable and Versatile Sound Technologies

I have explained in the previous chapters how the portability of sound recording instruments affected what could be recorded—birds in fields accessible to trucks, for instance. It also affected who could be involved; thus, the rise of magnetic tape recorders for consumer use in the 1950s enabled ornithologists to create a moral economy of exchange with amateurs. With the rise of digital technologies, sound recording’s portability and versatility acquired profoundly new meanings. Unprecedented levels of virtuosity could be attained in such matters as switching between analytical, synthetic, and interactive listening.

In ornithology, field research is a new experience now that portable digital databases of bird sounds on iPods and iPads enable on-the-spot comparison between what has just been heard and what can be found in the database. At times, this new option leads to false reports of bird observations, when ornithologists or amateur bird spotters assume they have heard a bird singing whereas in fact it was just a digital sound device playing a recording of bird vocalization. The technology does, though, allow recorded bird calls to be used to attract individuals of the same species to a particular spot during fieldwork, or to prompt competing male birds to call in response to the taped ones. As a form of explorative interactive listening, playing recorded bird sound to elicit vocalizations had already taken off in the age of the magnetic tape recorder, but the larger numbers of recordings available to ornithologists today have changed the game. Moreover, portable computer devices and free audio imaging software also help birdwatchers to, as one ornithologist put it, see what they hear and hear what they see synesthetically, indoors and outdoors (Bruyninckx 2013: 167).

Alexandra Supper (2012, 2015: 451ff) has sketched three ways in which the rapid expansion of sonification initiatives since the early twenty-first century has been assisted by digital technologies. First, the digital age tremendously increased the options for sharing and circulating sound files, and thus for carrying out sonification. Earlier sonification enthusiasts had used flexi discs in the 1970s, or compact discs in the 1990s, as appendices to paper publications. The introduction of digital audio storage, and notably the MP3 format in 1996, made the distribution of recorded sound faster, less costly, and almost effortless, although MP3 files may, in principle, be protected against free distribution by means of digital rights management. The “preservation paradox” of digital technology—which enables the easy transfer and storage of digital audio, yet undermines prospects of long-term retrieval due to the rapid introduction of new formats (Sterne 2009: 64–65)—is a potential threat to the robustness of sound-for-knowledge, just as it is for visual digital information. But the fact that MP3 files can be inserted into electronic publications and integrated with texts and images, rather than having to be attached as addenda, has enhanced the epistemological credibility of sound. The advantages of inscription as set out by Latour, such as superimposition, are no longer restricted to the visual representation of data: “synaural” presentation, as already flagged in previous chapters, is now possible as well. And as Florian Dombois, one of Supper’s interviewees, put it: “A sound has to be published in order to count as an academic argument” (cited in Supper 2015: 452).

Second, the rise of digital tools for processing and creating sound, notably sound synthesis tools such as SuperCollider, MaxMSP, and Pure Data, has extended the possibilities for flexibly tweaking the parameters of sonification. Rather than simply transposing time-series data to waveforms in the human auditory range (that is, “audification”), sound synthesis allows for “much more complex mappings between data and sound and for many more audio parameters (such as pitch, timbre, duration, brightness and panning)” (Supper 2015: 452). Because many of these tools have their origins in the world of computer music, they also make the sonifications more aesthetically accessible. The downside of their roots in music, however, is that the tools were not originally designed to handle large data sets, the facilities for which have to be added and kept compatible with changing software packages.
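The contrast between audification and parameter mapping can be sketched in a few lines of code. The following is a minimal illustration of the two techniques only; the function names, frequency ranges, and NumPy-based synthesis are my own assumptions, not the code of any project discussed here:

```python
import numpy as np

RATE = 44100  # audio sample rate in Hz

def audify(data):
    """Audification: play the time series itself as a waveform,
    merely centered and normalized to the [-1, 1] sample range."""
    x = np.asarray(data, dtype=float)
    x = x - x.mean()
    peak = np.abs(x).max()
    return x / peak if peak > 0 else x

def sonify_pitch(data, dur=0.1, fmin=220.0, fmax=880.0):
    """Parameter mapping: each data point becomes a short sine tone
    whose pitch is scaled linearly between fmin and fmax -- one of
    many possible mappings (pitch, timbre, duration, panning, ...)."""
    x = np.asarray(data, dtype=float)
    lo, hi = x.min(), x.max()
    span = hi - lo if hi > lo else 1.0
    freqs = fmin + (x - lo) / span * (fmax - fmin)
    t = np.arange(int(RATE * dur)) / RATE
    return np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])
```

Where audification keeps the data’s own waveform, parameter mapping interposes a synthesis stage, and it is precisely this stage that opens up the richer, but also more contestable, design choices Supper describes.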

This is all the more important given that, third, growing academic interest in and availability of big data have added urgency to the quest for sonification. A long-standing promise of sonification has been to create order in the chaos and abundance of data. To be sure, this does not mean sonification offers straightforward solutions. As discussed in the previous chapter, the sonification community is currently better equipped to give audibility to data that are already well understood than to extract new patterns and information from unfamiliar data (Supper 2015: 458). Nonetheless, the versatility of digital technologies has opened up many novel ideas and practices for the “transduction” (Helmreich 2012: 160) and “synesthetic conversion” (Mody 2012: 225) of signals from one sensory mode into another, thus bridging older divides between distinct disciplines and social domains. I will return to these forms of bridging in this essay’s last chapter.

Somatic Vigilance: Attuning to Instruments and Time

Our case studies offer ample examples of listening for monitory purposes to the sound of machines in factories or to research instruments in laboratories. In twentieth-century industrial settings, workers considered the auditory surveillance of machines so crucial to their performance that they even hesitated to accept hearing protection (Bijsterveld 2008, 2012). Nowadays, such protection is compulsory in many countries, and most industrial machines are monitored using computer screens. In certain situations, though, operators may still monitor machines by listening to them. An example comes from the observations and interviews carried out by Stefan Krebs in 2013 at Frogmore Mill in Hemel Hempstead, UK. Frogmore is now a heritage institution, but until 2000 it was the world’s oldest mechanized paper mill still in operation. One of the operators referred to his experience on occasions when

the noise was so great, that, you could, if you were tired, as you often were late at night, the noise, you would begin to hear things, so you begin to hear choirs or orchestral music, that kind of thing, just, just a kind of dream or an auditory daydream would come about, and it’s something I actually found that I can control, so I could actually hear pieces of music that I knew well … and clearly what was going on was my brain … filtering out what it didn’t need, and it wasn’t the same as in a quiet room imagining the music, in that noise I was actually hearing it. (Operator cited in Krebs 2017: 43)

Interestingly, the operator additionally said that when he wandered off into auditory daydreams, he did not stop deploying his listening skills to monitor the machine’s functioning. In fact, whenever his musical experiences were interrupted, he would know that something significant had changed. Apparently, he first transformed machinery noise into music; then, the musical patterns or breaches in those patterns informed him of the proper or problematic functioning of the machine. As Joy Parr showed in Sensing Changes, such rhythms may be deeply ingrained in people’s corporeal experiences of the locations they inhabit (Parr 2010, 2015: 18).

Whereas Stefan Krebs interviewed paper-mill operators about their memories of sensory skills in the recent past, sociologist Sarah Maslen (2015) interviewed fifteen doctors from different disciplines about their listening experiences in the present day. In one of these interviews, an orthopedic surgeon explained that arthritis involves the loss of “the layer of cartilage that allow[s] for frictionless movement” in the joints. This loss is not visible on X-rays, but announces itself through “creaks” and “grates,” sounding “like wheels that need oil.” These and other bodily sounds are also relevant during orthopedic surgery. When surgeons are drilling bones to insert implants, for instance, changes in pitch tell them they have reached hollow or outer areas of the bones, helping them to navigate through bodies during operations (Maslen 2015: 61–62). This form of monitory listening enables surgeons to distinguish spatially between right and wrong: Yes, I need to be here, or No, it’s the wrong spot.

In principle, this monitory quality also holds for navigation sounds “that are artificially produced and played back through magnetic speakers or piezoelectric units in medical equipment to indicate surgical operative tasks” (Schneider 2008: 2). The value of alarm signals generated by medical instruments in operating theaters and intensive care units is felt to be less evident, however. Although auditory signals such as buzzers, beeps, sirens, pulses, or chimes in theory call the staff’s attention to problems in the patient’s condition or to procedural faults, it is by no means clear whether alarm sounds actually enhance performance in hospitals and similar settings. The answer appears to depend not only on the character of the sounds, but also on the type of work and the workload of the people who must respond to the alerts. If staff have a high visual workload, for instance, auditory alarms seem to be useful (Edworthy and Hards 1999: 604), but when the overall workload is too high, operators may start to rely too strongly on alarms—whether visual or auditory (Endsley and Jones 2012: 155–157).

The proliferation of auditory medical alarms since the 1980s, partly due to manufacturers’ perceived need to protect themselves against liability claims, has complicated the alarms’ use. Anesthesia machines, artificial ventilators, blood warmers, electrosurgical units, hyperthermia systems, infusion pumps, monitoring systems, and pulse oximeters all have built-in alerts. Some of these may be masked by the cacophony of sounds, and even when the alarms are noticed, it may not be easy for medical staff to identify their sources or interpret their urgency correctly. Confusingly, different manufacturers offer different alarm sounds for the same variables, while devices of different types may generate similar alarm sounds, depending on their make.

A 2006 article discussing the interpretation of thirteen medical alarm signals by clinical engineers with different levels of experience concluded that the overall recognition rate was a mere 48 percent (Takeuchi et al. 2006). The International Organization for Standardization issued several recommendations on standardizing alarm sounds in the 1990s, but real standardization has failed to materialize, inspiring scientists to design aids such as an alarm sound database and a simulation set-up for training operating room attendants. The simulation enables users to listen to alarms in the context of an artificial hospital soundscape featuring speech, doctors’ beepers, automatic doors, and music (Takeuchi et al. 2006; Schneider 2008). Auditory navigation and auditory surveillance are thus still significant monitory listening skills in hospitals, though ones endangered by an over-abundance of alarms.

In fact, the large number of alerts may also elicit new sonic skills. Chapter 3 discussed experienced nurses tightening intensive care unit alarms to reduce the overload of alerts: an example of interactive monitory listening. Patients have less control, however, and Tom Rice reports that his “patient interlocutors often experienced the wards as being disturbingly noisy,” alarms being one of the sources of such noise (2013: 29). Several of the nurses Anna Harris talked to during her fieldwork at an Australian hospital nostalgically evoked the relative tranquility of the intensive care unit (ICU) of the past. She cites one of them:

Now there is an alarm for everything and they are forever tightening the alarms. The noise is horrific now. … It’s changed so much. There is no respite any more. [phone rings nearby] There is a sound for everything—to get in a door and another click when you leave. The [hand cleanser] dispenser makes a noise too! I remember the sound of billows in the ICU—it was quite peaceful, like white noise … I could go to sleep to that noise. Gone are the days of peaceful ICU. (Field notes Anna Harris, Melbourne, October 21, 2013, cited in Harris 2015: 25)

It is worthwhile reflecting further on one of the reasons for this plethora of alarms: medical instrument manufacturers’ fear of liability for non-functioning instruments. This implies an important new context for the epistemic relevance of sound: the alarms indicate both the experts’ dependence on black-boxed machines—as manufacturers set the alarms—and, in some contexts, the need they feel to constantly monitor and discuss the machines’ performance. Joeri Bruyninckx has shown the significance of these phenomena for sound and listening in modern science labs. In recent years, lab experiments have increasingly been organized around automated tools and expensive instruments that function as platforms for large numbers of researchers from different fields. Bruyninckx studied the handling of automated experimental protocols, carrying out extended ethnographic observations of and interviews with researchers and technicians in a Dutch lab for surface science, plasma science, and materials science (anonymized as PlasmaLab); 6 he also examined user practices concerning the same type of platforms in three US labs for five months. The American labs worked with mass spectroscopic and nuclear magnetic resonance (NMR) techniques, for crystallographic characterization and the definition of molecular structures.

At first glance, it might be expected that using instruments with commodified software reduces reliance on researchers’ sensory skills for monitoring the instruments. Indeed, programmed commands have been introduced to boost productivity and efficiency, and to standardize the experimental set-ups and enhance replicability. Bruyninckx (2018: n.p.) has shown, however, that these intentions do not mean the instruments are always or entirely trusted—such trust needs to be actively constructed and constantly reaffirmed. In hospital operating theaters, responsibility for the proper functioning of equipment seems to be delegated to the instrument manufacturers and the alarms, but the situation in experimental research labs is different.

Several of the researchers at PlasmaLab, for instance, had extensive experience with custom-built instruments, making them very aware of the effects of in-built parameters on the experimental results and keen to “open the hood” of ready-made tools, for instance by contacting manufacturers. Even without such experience, many of the researchers observed and interviewed considered the “knowability” of instruments key to assessing the set-up’s stability and the reliability of experimental outcomes. “Sometimes,” one doctoral researcher noted, “you actually think that the reactor has a personality” (Bruyninckx 2018: 11). For him and many of his colleagues, this means being aware of the instruments’ whims in order to grasp unexpected outputs or breakdowns and to decide whether an experiment has succeeded or not. Understanding the internal working of instruments additionally contributes to researchers’ independence from technicians, which in turn helps to build up their trust in their own and their peers’ qualities as experimentalists—trust that also arises from the ability to answer critical queries about data in departmental meetings, for instance.

In pursuit of knowability, researchers often want to stand next to the instruments that provide their samples, hoping to materially witness the instruments’ functioning on the spot. These practices embody “somatic vigilance,” a “guarded attentiveness towards the technical conditions under which data are produced and interpreted.” Somatic vigilance is more than the organized skepticism considered typical of science: it is a “tactic used by researchers to calibrate trust judgments” within the material, social, and knowledge regimes of their research settings (Bruyninckx 2018: 3, 7). Bruyninckx illustrates it with sensory examples of monitoring. These include reading graphs and numbers indicating the instruments’ output, but also touching the instruments to check for heat or vibrations and listening to their sounds:

The setup is automated so that it can be operated fully via the desktop monitor, but I always listen. You know that when you enter this [value], you should hear this sound … . I don’t trust the button {pause} you know, it is just a machine, something can go wrong. When I hear it, I know it for certain. (Field notes, 11 July, 2013, cited in Bruyninckx 2018: 18)

Similarly, the operational rhythm of the lab’s entire soundscape tells researchers whether experiments done by others are running smoothly or signal unsafe situations. These examples show once again that the purpose of monitory listening can call for both synthetic listening (to all audible sounds at the same time) and analytic listening (focusing on one or a few sounds amidst everything that is audible).

Somatic vigilance is not limited to science researchers. During his fieldwork at the American chemistry and biology labs, Bruyninckx closely followed technicians in their day-to-day work, and happened upon the following instruction note near one of the instruments:

Attention all Bruker 600 Users!!!

If you do not hear the cryoprobe’s helium pump

“chirping”, DO NOT use the instrument!

STOP

This means the probe is not working properly

And you will NOT get a spectrum.

Thanks. 7

In this as in the other labs, technicians are responsible for the smooth operation of the machines and systems that form the heart of the workflow. Bruyninckx noticed that in fulfilling this responsibility, technicians commonly rely on their experience of what research instruments “should look, feel, smell, and sound like” under normal circumstances, recalling the somatic vigilance of the plasma researchers just discussed. Some technicians not only acquire their own situated and embodied skills, but also train the user-researchers by calling their attention to these sensory specifics, among other things warning them that relevant sounds may be masked by the noise of other laboratory instruments. User instructions to “‘listen for a click,’ ‘wait for the pzzzz,’ or ensure that no ‘hiss’ or ‘chirping’ can be heard” aim to persuade users to monitor the instruments’ functioning, but they also, or especially, encourage responsiveness to the rhythms of the machines more generally. They help the technicians to synchronize “users’ temporal expectations with their instruments’ rhythms by redirecting their attention, inviting them to open their bodies and allow themselves to be temporarily affected by an instrument in use” (Bruyninckx 2017: 834).

Such synchronizations, Bruyninckx argues, are vital to today’s lab culture. As large, shared, and expensive instruments such as NMR proliferate, their efficient and cost-effective use has become increasingly important. This means that the platform’s “organizational time”—its temporal management—needs to be attuned to its “instrumental time.” Bruyninckx distinguishes three forms of organizational time. “Scheduled time” refers to the time slots (for example: ten minutes during prime time) assigned to individual users or groups of researchers working with the platform technologies. In “billing time,” these slots are translated into costs for particular departments by computer systems that track log-in and log-out shifts, while “strategic time” reflects management decisions on “long-term research activities, research lines, and instrumental acquisitions.” Instrumental time, in contrast, alludes to the “sequences, rhythms, and durations in activities of repair, maintenance and operations” that are specific to particular research instruments and protocols (Bruyninckx 2017: 828–830).

The work done by technicians is crucial for aligning the three forms of organizational time with instrumental time. When the replacement of a machine is postponed in strategic time, for instance, wear and tear is likely to affect its performance, and therefore instrumental time. Technicians often play a vital part in tackling and resolving such slippages, and their instructions on attentive monitory listening, such as “wait for 3.3 minutes until the noises stop,” are particularly important (Bruyninckx 2017: 832–833). These synchronizations and monitory alerts, together with technicians’ prioritization of particular tasks and their repair work to prevent system breakdowns, are by no means phenomena at the margins of contemporary science—they are at its very heart.

Auditory Sublime: Promising Wonder and Awe Through Sound

“Popping up” is exactly what has been happening with the sonification of scientific data since the turn of the twenty-first century. Alexandra Supper had no problems at all gathering many recent cases in fields as diverse as the geosciences, neurology, high energy physics, genetics, astrophysics, and microbial ecology (Supper 2012, 2014, 2015). 8

Some of these sonification projects have been initiated by artists. An example is the sound installation The Place Where You Go To Listen, created by composer John Luther Adams in 2006 and located at the Museum of the North, University of Alaska. Among its sounds are “sustained chords” that sonify data on the position of the sun, and “deep rumbles” sonifying registrations from several of Alaska’s seismological stations (Supper 2012: 39–40). Other events have been organized by researchers, such as Gerold Baier and Thomas Hermann’s sonification of the electroencephalogram of an epileptic seizure at the Wien Modern festival in 2008 (Supper 2014: 34–35). In a third group of sonifications, scientists and artists collaborate. For LHCSound, online since 2010, physicist Lily Asquith worked with software specialists, the musician Richard Dobson, the composer Archer Endrich, and others to sonify particle detection data, including data about the Higgs boson so famously reported in 2012 (Supper 2014: 40). The project’s legacy can still be found on the website of the Large Hadron Collider (LHC), the world’s largest and most powerful particle accelerator, at CERN, the European Organization for Nuclear Research. It features CERN scientists playing musical instruments such as the harp, clarinet, and violin as “LHChamber Music,” while reading scores that are sonifications of their LHC data. 9 These and other examples often sound like contemporary classical compositions, ranging from mildly to wildly avant-garde.

When Supper talked to the scientists involved in sonification projects, however, many of them said that in their day-to-day work, understanding data through sound was actually less important than the media coverage of talks, concerts, festivals, and web events suggested. A case in point is sonification in asteroseismology, a subfield of astrophysics that aims to understand the internal structure of stars by observing their pulsations. These observations are relevant because the stars’ variations in brightness are thought to result from oscillations in the ionization equilibrium in their outer layers. In turn, the oscillations and their frequency spectra depend on the stars’ mass and radius. Oscillation modes can therefore give scientists information about the properties of the stars’ cores, which are hard to study in any other way. In lectures for students and talks for general audiences, astrophysics professor Conny Aerts frequently explains these phenomena using stellar sonifications: “synthesized, sped-up sounds based on the visual observations of stellar oscillations” (Supper 2012: 43). She has also collaborated with composer Willem Boogman, whose piece Sternenrest sonifies the data on one specific star and uses surround sound to position the audience right in the middle of those data. Yet Aerts emphasizes that she and her colleagues tend to study oscillations visually rather than sonically. The sonifications are almost exclusively employed to introduce students to astrophysics or to reach out to the general public.
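The “sped-up sounds” rest on a simple principle: stellar oscillation frequencies, typically measured in microhertz, are multiplied by a large factor until they fall in the audible range. The following sketch illustrates that speed-up only; the function name, parameter values, and NumPy-based synthesis are my own assumptions, not the astronomers’ actual tools:

```python
import numpy as np

def sonify_star(mode_freqs_uhz, amplitudes, speedup=1e6,
                rate=44100, dur=3.0):
    """Sum a star's oscillation modes as sine waves, each frequency
    converted from microhertz to hertz and multiplied by a speed-up
    factor so that the pulsations become audible."""
    t = np.arange(int(rate * dur)) / rate
    signal = np.zeros_like(t)
    for f_uhz, amp in zip(mode_freqs_uhz, amplitudes):
        f_audio = f_uhz * 1e-6 * speedup  # e.g. 3000 uHz -> 3000 Hz
        signal += amp * np.sin(2 * np.pi * f_audio * t)
    peak = np.abs(signal).max()
    return signal / peak if peak > 0 else signal
```

Solar-like oscillators pulsate around 3000 µHz (the Sun’s famous five-minute oscillations), so a speed-up factor of a million turns such a mode into a mid-range tone, and the beating between closely spaced modes becomes directly audible.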

Why is it, though, that scientists find sonification so helpful in those communicative situations? And what motivates artists to use it? In the world of modern music, Supper explains by reference to musicologist Richard Taruskin, sonification responds to a twentieth-century trend to regard music as a canvas for the objective and the material rather than as the expression of individual, Romantic subjectivity. Adams, for instance, defines The Place Where You Go To Listen as art produced by natural phenomena. Against that background, it is understandable that artists often take the exact relationships between data and sound more seriously than scientists do when presenting sonifications to the public (Supper 2014: 39–40). Additionally, sonification promises to compensate for the loss of “deep structure” that audiences began to experience when electronic music departed from classical music (artist John Dunn cited in Supper 2014: 41)—or, perhaps, for the painful void faced by many listeners to electronic music when it rendered conventional harmonic and rhythmic patterns obsolete.

For scientists, popular sonifications embody another promise: that of evoking an “auditory sublime” in those who listen (Supper 2012: 71, 2014: 34). Traditionally, the Kantian sublime stands for experiences of “infinity and unimaginable greatness” elicited by natural phenomena such as storms or mountains—observed at a safe distance, yet with an emotional ambivalence in which awe and pleasurable wonder are mixed (Supper 2014: 44–45). The notion of the sublime has since been applied to the experience of art, architecture, and grand technologies as well. Supper identifies it in the rhetorical, musical, technical, and spatial means by which scientists foster sonification in collaboration with artists. Auditory and musical metaphors abound in the texts accompanying those sonifications, recalling Yolande Harris’s (2012) findings when she examined bioacoustics research on the underwater sounds of whales. Natural phenomena, Supper shows, are said to “speak” to their audiences; they have something to “tell.” The synthesis of proteins adheres to “a genetic score,” stars have a “voice” and “sing,” and humans can “eavesdrop on the brain” (Supper 2012: 54–57). References to the sublime are ubiquitous, in “the wonders of the cosmos, the dangers of the earth, the inconceivability of particles, the powers of genes and the complexity of the brain” (Supper 2014: 47).

Many sonification makers present sound as the perfect means to elicit sublime experiences and enable listeners to emotionally connect to the mysteries of nature. The three-dimensionality of sound is regarded as vital to these experiences—sound offers a particular sense of presence, immersion, and intimacy with natural phenomena, without actually getting dangerously close. Such immersion can be enhanced technologically, as with surround sound speakers. At the same time, the sonifications are intended to enthrall: the sounds can be loud and uncomfortable, but they may equally be “eerie” and “otherworldly,” “chilling,” and “disquieting” (Supper 2014: 47). Together, these dimensions offer virtual access to and deeper understanding of natural phenomena such as stars, volcanoes, and particles “that are too far away, too close-by, too big, too small, too high, or too low to be experienced in an unmediated way” (Supper 2012: 72). It is the exciting expectation of the sublime, of experiencing nature in its most overwhelming forms through sound, that scientists believe will attract the general public to large-scale research projects. Rather than inviting those audiences to listen diagnostically and analytically, I would add, the scientists seem to aim for experiences of exploratory listening in its synthetic mode.

Conclusions

When participants at LIGO Lab’s press conference on gravitational waves said that the universe had “spoken” to us and that we could not remain “deaf” to the waves, this was clearly an instance of invoking the auditory sublime to attract the attention of the public. The LIGO scientists were trying to bring a complicated natural phenomenon closer to the wider audience while simultaneously inspiring a sense of respectful distance. In many natural science projects, an additional step has been collaboration with sound artists in order to help the public develop a fascination with the otherwise often intangible products of contemporary science.

The increasing versatility and portability of digital technologies have been highly productive in these processes. Not only do they assist the continual transduction of data from one sensory domain into another, from the visual into the auditory and vice versa; digital sound technologies also enable sound variables to be presented in ways that stage patterns in the data in more accessible forms than in the past. The rise of music software with easy-to-use interfaces has been instrumental in extending the options now available to sonification experts.

Increased digital versatility and promises of sublime experiences offer clues as to why displaying data in terms of sound continues to pop up and has even grown in importance despite the sonification community’s failure to find a “killer application.” And although sonification specialists still call for exploratory and diagnostic listening to data, monitory listening has gained increasing relevance in science labs—paying close attention to the rhythm of ever more expensive instruments can prevent them from running out of control and requiring costly repairs.

This chapter has also shown that attending to the role of sound allows us to articulate new developments in the sciences, such as the synchronization of work required in labs with large, grant-greedy set-ups or scientists’ use of sonification in outreach activities. But the wider mechanisms behind the recent rise of the sonic versions of somatic vigilance and the exploitation of the auditory sublime have not yet been set out in detail. They form the topics of this essay’s final chapter, on the relationship between listening for knowledge and issues of timing, trust, and accountability in the dynamics of science, technology, and society.

Notes

  1. Dennis Overbye, Jonathan Corum, & Jason Drakeford, “Ligo Hears Gravitational Waves Einstein Predicted,” video, The New York Times Online, February 11, 2016, at http://nyti.ms/1V6puGS (last accessed July 25, 2016).

  2. Dennis Overbye (2016), “Gravitational Waves Detected, Confirming Einstein’s Theory,” The New York Times Online, February 11, 2016, at http://www.nytimes.com/2016/02/12/science/ligo-gravitational-waves-black-holes-einstein.html?_r=0 (last accessed July 25, 2016).

  3. “Gravitational Waves Detected 100 Years After Einstein’s Prediction,” February 11, 2016, at https://www.ligo.caltech.edu/news/ligo20160211 (last accessed July 25, 2016).

  4. National Science Foundation, “LIGO Detects Gravitational Waves—Announcement at Press Conference (Part 1),” at http://mediaassets.caltech.edu/gwave#conf, at 10’38 ff. (last accessed July 25, 2016).

  5. Listen to the LIGO-edited sound files at https://www.youtube.com/watch?v=KzVDlFpaRRk&sns=em (last accessed July 25, 2016).

  6. This particular case study had two phases: an explorative one (two months) by Aline Reichow in 2011, and a systematic phase executed by Joeri Bruyninckx (nearly three months) in 2013. Bruyninckx interviewed fifteen researchers and technicians.

  7. Field notes Joeri Bruyninckx, facility A, January 30, 2014.

  8. For a few examples, see http://exhibition.sonicskills.org/exhibition/booth1/how-are-sonifications-made/ and http://sss.sagepub.com/site/Podcasts/podcast_dir.xhtml (both last accessed August 14, 2017).

  9. http://home.cern/about/updates/2014/10/cern-scientists-perform-their-data (last accessed January 20, 2017).