The creation and analysis of an audio fiction presuppose an understanding of what Schafer called the soundscape: the diegetic or extradiegetic (re)creation of a sound environment ("real" or "virtual") in its physical and synesthetic aspects. Just as we can create, describe, and illustrate an environment through words and visual images, we can also use sound to do so. The soundscape of an environment should not only describe it but also capture its sensorial atmosphere.
The design of the sound ambiance is vital to establishing the setting of a narrative for BVIP. We can compare the soundscape of an audio fiction to the scenario of a movie for a sighted person. It can act as a support to the plot, or it can be so crucial that it develops into a character of the storyline. An example of this last case, in cinema, is The Shining (1980), Kubrick's film adaptation of Stephen King's 1977 novel. In this classic horror picture, the Overlook Hotel, where Jack Torrance's family gets stranded after a winter storm, is as important as the story's leading roles.
Each soundscape is composed of two elements: sound layers and sound textures. By sound layers we mean the different strata of sound that appear in an environment, that is, the number of distinct sounds that can be heard and identified at the same time. These sounds usually come at different volume intensities and can be understood as background or as foreground.
The sound designer's sensibility is fundamental in choosing among the variety of available resources. An excess of distinct sonorities does not always work in the soundscape; on the contrary, it can often result in noise pollution. Likewise, the intensity of the different volumes (measured in decibels) is fundamental to reinforce an idea of verisimilitude, when this is the case.
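The relation between decibel values and layered mixing can be sketched in a few lines of code. This is a minimal illustration, not any production tool: the layers, gain values, and function names below are assumptions chosen for clarity, and each "layer" is simply a list of sample values.

```python
# A minimal sketch of mixing sound layers at different intensities.
# Each layer is a list of float samples; each gain is in decibels
# relative to full scale (values here are purely illustrative).

def db_to_amplitude(db):
    """Convert a decibel gain to a linear amplitude factor."""
    return 10 ** (db / 20)

def mix_layers(layers):
    """Sum (samples, gain_db) pairs into a single mix."""
    length = max(len(samples) for samples, _ in layers)
    mix = [0.0] * length
    for samples, gain_db in layers:
        amp = db_to_amplitude(gain_db)
        for i, s in enumerate(samples):
            mix[i] += s * amp  # quieter layers recede into the background
    return mix

# Foreground voice at 0 dB, background rain bed 20 dB quieter.
voice = [0.5, 0.5]
rain = [1.0, 1.0]
mix = mix_layers([(voice, 0.0), (rain, -20.0)])
```

A 20 dB reduction corresponds to one tenth of the amplitude, which is why the rain layer is perceived as background rather than competing with the voice.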
We can also identify in each layer of sound a characteristic tone, a specific low or high tessitura, which is called the sound texture. In other words, it is the characteristic, or set of gathered features, that makes a given sound recognizable as such and not as any other.
Creating elaborate soundscapes requires extensive knowledge of sound design, beyond the simple use of foley and sound effects. By sound design, we mean the whole process of research, selection, application, and adequacy of a sound element. Some people nowadays use the term sound designer, rather than audio technician, to designate the professional responsible for this task because, in a way, the sound designer does with sound what the designer traditionally does with the visual. In addition to sensitivity and technical mastery, the professional responsible for sound design should understand the characteristics of, and relationships between, the five elements of sound design: voice, music, foley, sound effects, and "silence."
By voice we understand all types of intelligible or unintelligible sound (above or below the semantic level) emitted by the human speech apparatus - and therefore, not only "speech." Each voice is unique and depends mainly on three factors: physical, psychological, and sociocultural. The first is defined partly by genetic inheritance and partly by different training techniques; the second, by a person's way of being or acting and their personality; and the third, by the relationship of the self with specific groups in their life in society.
Voice is undoubtedly one of the leading elements of audio fiction since it allows orality to be recovered and updated, mediated by sound technologies. The voice on the radio is a mediated orality because, in the first instance, the microphone picks up the voice differently from how the human ear does, which explains the strangeness of hearing one's own recorded voice. Secondly, once recorded and stored, the voice can become raw material for further manipulation across its most diverse spectra. It can be thought of and used in three different ways: speech, vocal possibilities, and other sounds of the voice.
According to Klippert, up until the early 1960s, hörspiel was considered a work of art because it lacked many post-production resources. The use of music in sound narratives expanded until it came to be used in three main ways: as "ambience," as incidental music, or as a leitmotiv. In the first case, the music serves to create or to reinforce a certain atmosphere:
"Composers dedicated to hörspiel, such as Hugo Pfister and Winfried Zillig, understood music in its dramaturgical function as a means of complementing, intensifying or structuring processes of spoken dramatic action. Hugo Pfister wrote that hörspiel music has the power to give atmosphere to a scene, staying in the background, perhaps almost inaudible. (…) There was a consensus that the music of radio drama should never be an end in itself".
Incidental music is the music that is "playing" within the scene and that can blend with the other sonorities of that scene (diegetic). It differs in essence from the previous category because, in this case, it represents the sonority of a sound body present in the scene and not a soundtrack "evoked" by the director (extradiegetic), although it can be used with a similar function.
Music as leitmotiv has the function of associating a musical sonority with a specific dramatic situation, usually returning countless times during the story, whenever that situation (re)appears. Thus, the same narrative can present more than one leitmotiv, which, in turn, can be performed in different arrangements. However, unlike in television, the presence of numerous leitmotivs in the same story is not a standard procedure in radio plays, because the listener cannot count on the support of visual images to facilitate these associations with a dramatic situation.
Foley is equivalent to what many call sound effects - we shall see later the distinction between the terms - that is, the sounds emitted by the sound sources present in a given dramatic scene.
"The first and main difference between the various sounds our ears hear is the difference between noise and musical sounds… We realize that, generally, a noise is accompanied by rapid alternation between different kinds of sound. Think, for example, of the rattling of a carriage over granite pavement, of water splashing and seething in a waterfall or in the waves of the sea, of the rustling of leaves in a forest. In all these cases we have fast and irregular, but distinctly perceptible, alternations between various kinds of sounds, which manifest intermittently".
Foley can be used in two different categories: natural or artificial. Natural foley corresponds to the use of a sound whose actual sound referent exists "outside the studios" (diegetic sound). In this case, it may have a figurative function, that is, the sound is used as an immediate correspondent to its sound source; in a scene set on a farm, the neighing of a horse is heard, for example. In a metaphorical function, by contrast, a sound recognized as natural foley is used outside its original context.
Artificial foley corresponds to the creation of a sound whose immediate referent does not exist "outside the studios" (non-diegetic or extradiegetic sound). It is used as a sound representation of an unknown object, or else serves a more abstract, formal function, without being associated with a specific sound source, and yet without being confused with music.
Sound effects are any intentional type of filter, distortion, or manipulation that significantly changes the final shape of the sound in its physical constitution (waveform). Currently, sound effects are numerous and can be produced and combined with each other ad infinitum on a soundboard, on a mixer, or in dedicated software. The echo, for instance, is one of the most well-known sound effects: it is a reflection that makes the sound arrive at the listener with a delay relative to the moment of its emission.
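The echo described above can be sketched as a simple delay line: a copy of the signal, attenuated and shifted in time, is mixed back into the original. This is a toy illustration under assumed conventions (mono signal as a list of float samples; illustrative delay and decay values), not the algorithm of any particular effects unit.

```python
# A minimal sketch of a digital echo: mix a delayed, attenuated
# copy of the signal into itself.

def add_echo(samples, delay_samples, decay):
    """Return the signal with one echoed reflection appended.

    delay_samples: how many samples later the reflection arrives.
    decay: amplitude factor of the reflection (0.0 to 1.0).
    """
    out = list(samples) + [0.0] * delay_samples  # room for the echo tail
    for i, s in enumerate(samples):
        out[i + delay_samples] += s * decay      # the delayed reflection
    return out

# Example: a single impulse (a "click") produces the original click
# plus a quieter reflection two samples later.
signal = [1.0, 0.0, 0.0, 0.0]
echoed = add_echo(signal, delay_samples=2, decay=0.5)
```

Real effects chains repeat this feedback many times, producing a train of ever-fainter reflections, but the principle is the same delayed sum.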
The absence of audible sounds, that is, silence, can also be considered a form of expression, as we find in minimalist works, notably John Cage's. The American musician and composer, influenced by the teachings of Zen Buddhism, had two significant experiences that led him to think about the role and importance of silence in a sound piece.
Absolute silence can only exist in situations in which sound cannot propagate - as in a vacuum, for example. Humans can hear frequencies between 20 and 20,000 Hz, and no natural environment is free of sound within that range, which means that no natural environment provides absolute silence for the human being. Even the quietest environment or situation we may know or have experienced has a minimal sound unit, identified by audio professionals as "breath" (the room tone, or noise floor), which varies at each specific location. In searching for the characteristics and possibilities of silence, John Cage remained for some time in an anechoic chamber, which prevents the reflection of sound waves and is isolated from any external noise. About this experience, he recalls:
"I thought there was something wrong with the room, some leak. I looked for the sound engineer and told him that the anechoic chamber had some problems: 'I can hear sounds inside. How is it possible?' Then he asked me to describe them; I described them as a bass sound and a treble sound. 'Well,' he said, 'the treble sound is your nervous system, and the bass sound is the noise in your bloodstream.' So it became clear to me that silence does not exist, that it is a mental matter. The sounds you hear are probably silent if you do not want them. But they are always ringing. There is always something to listen to".
In the second experiment, Cage composed, in 1952, a work entitled 4′33″, whose score contains only time indications, without musical notes, which shifted the focus of interest away from the work itself. In this manner, the composition incorporated the external characteristics of its surrounding reality, the sound to which attention is paid during its performance. The music is always the same; what changes is precisely the sound environment around it. The work will produce a different result each time it is executed, depending on the time and space. About 4′33″, the composer stated:
"(…) has changed my mind, of course, in the sense of appreciating all those sounds that I do not compose. I discovered that this piece is the one that is happening all the time. I wanted people to find out that ambient sounds are often more interesting than the sounds we hear in a concert hall".
Therefore, silence in audio fiction should always be considered in its figurative sense, because both in the studio and in the listener's reception environment there will always be sound, desirable or not (unwanted noise). "Silence" can, therefore, appear in different forms with different functions. In principle, every form of speech (and of sound, in a general way) is understandable only through the presence of "silence." If speech formed an eternal continuum, we would probably not be able to distinguish the changes and nuances present in its dynamics. Pauses during speech may also convey states of a character such as nervousness, anxiety, doubt, sadness, or hesitation. "Silence" can likewise be used as an element of absence, allowing the creation of an idea of the passage of time, of reflection by the listener, or of expectation of a fact that will occur, creating a climate of suspense that precedes the action.
The use of these five elements of the sound design of an audio fiction usually takes place through electronic or digital equipment, such as soundboards, mixers, and, more recently, computers. The art of audio fiction is born within this equipment, through the intermediation of a technical-creative process between its first conception and its final transmission.
Besides this technical knowledge, which is essential to building a repertoire of possibilities for the sound designer, he must also consider the available data about BVIP sensory perception. To create exciting art pieces for people with disabilities, authors and producers must take this public into account from the very conception of the audio fiction. "People who are blind use parts of their brain that normally handle vision to process language, as well as sounds - highlighting the brain's extraordinary ability to requisition unused real estate for new functions."
The concept of neuroplasticity in the blind demonstrates that sensory deprivation can cause the brain to modify itself to perfect its behavioral adaptation. In an experiment investigating an alternative approach to visual rehabilitation, Ella Striem-Amit used a sensory substitution device (SSD) to translate visual information into sounds. "A neuroimaging investigation of the processing of SSD information showed that despite their lack of visual experience during development, the visual cortex of the congenitally blind was activated during the processing of soundscapes (images represented by sounds)".
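The core idea of such a device - encoding visual information as sound - can be illustrated with a toy mapping. The convention sketched here (vertical position mapped to pitch, brightness to loudness) is common to several SSDs, but the function name and every number below are illustrative assumptions, not the design of the device used in the cited experiment.

```python
# A toy sketch of the sensory-substitution idea: mapping a small
# grayscale "image" (rows of brightness values, 0.0 to 1.0) to a set
# of sound events. Higher rows become higher pitches; brighter pixels
# become louder tones. All numeric values are illustrative.

def image_to_soundscape(image, base_freq=200.0, step=100.0):
    """Return (frequency_hz, amplitude) pairs, one per non-dark pixel."""
    events = []
    n_rows = len(image)
    for row_index, row in enumerate(image):
        # Top row (index 0) gets the highest pitch.
        freq = base_freq + step * (n_rows - 1 - row_index)
        for brightness in row:
            if brightness > 0:
                events.append((freq, brightness))
    return events

# A 2x2 "image" with a bright pixel in the top row and a dimmer one below.
image = [[0.0, 1.0],
         [0.5, 0.0]]
events = image_to_soundscape(image)
```

A real SSD would also scan the image in time, synthesize the tones, and sweep left to right, but even this reduced mapping shows how spatial structure can survive translation into sound.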
Therefore, art can dialogue with science to create audio fictions that are not only playful but also stimulate the senses of the BVIP audience. If the brain structure usually responsible for vision is used by blind people to process sounds, we might think of audio fiction as a potential substitute for audiovisual entertainment for those who have vision loss. Working the soundscape with layers, textures, voice, and music - all punctuated by silence - can build audio fictions that reach BVIP in extraordinary ways, amplifying their sensibility and helping to expand their imagination.
Thus, after dwelling on the specificities of the audio fiction genre, we need to think about its current situation and consider the future transformations that may broaden its appeal not only to BVIP but also to a wider audience.