Participants’ written feedback was prompted by the question ‘How would you describe your experience with the augmented radio?’ Verbal feedback was captured on the video camera’s microphone, with participants asked, if they were not initially forthcoming of their own accord, what they thought about the experience they had just undertaken. The bodily interactions between each pair of participants and the radio installation were recorded on a single, wide-angle video camera that covered the interactional setting of the installation. From this view, participants were recorded entering, interacting with and leaving the setting of the installation. These recorded interactions were then indexed and thematically analysed.
In the written feedback, all but one of our ten participants described their experience as being either ‘interesting’ or ‘fascinating’. Two participants commented on the authentic ‘valve warm sound’ and the ‘period appropriate programming’, one commenting that ‘It was interesting to have new technology used to interpret a story about an older object’ and that they would like to see this technology used throughout the museum.
Two participants made direct reference to how their bodily movements tuned the radio into the different broadcasts, likening this to their practical experiences and memories of tuning a traditional radio receiver. Comments were also made about being able to listen to individual broadcast material, as well as being able to construct or compose an individual soundscape experience from the different elements available, ‘picking up and losing the sounds’.
Additional positive references were made to the exploratory nature of the experience and its potential for being adapted as a maze, puzzle or mystery-solving experience. One participant mentioned that they would have liked additional visual or textual information displayed on the phone’s screen to complement and provide information about the audio they were currently listening to. Furthermore, this feature was suggested as an additional means of navigation within the experience, to visually indicate the whereabouts of specific sounds or, if something was missed, to provide a means by which it could easily be found again.
In the verbal feedback, participants described the experience of using their proximity and position relative to the radio to find the broadcast material amongst the sound of static as a metaphor for what it may have been like, or what it was like, to originally tune this type of analogue radio receiver. One participant commented:
It reminded me of how difficult and frustrating it used to be to tune a radio, because walking around the object was like tuning it.
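The tuning behaviour participants describe here can be sketched as a simple distance-based crossfade between a broadcast and the background static. The function name, coordinate convention and falloff radius below are illustrative assumptions, not the installation’s actual implementation:

```python
import math

def crossfade_gains(listener_pos, source_pos, radius=2.0):
    """Illustrative distance-based crossfade approximating the 'tuning'
    effect described above: the closer the listener stands to a virtual
    broadcast, the more it dominates over the static.

    listener_pos, source_pos: (x, z) positions in metres (assumed units).
    radius: assumed distance at which the broadcast fades fully to static.
    Returns (broadcast_gain, static_gain), each in [0, 1].
    """
    dx = listener_pos[0] - source_pos[0]
    dz = listener_pos[1] - source_pos[1]
    distance = math.hypot(dx, dz)
    broadcast = max(0.0, 1.0 - distance / radius)  # simple linear falloff
    return broadcast, 1.0 - broadcast
```

Standing directly on the virtual source yields pure broadcast, while walking beyond the falloff radius leaves only static, mirroring the ‘picking up and losing the sounds’ behaviour reported in the written feedback.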
The ‘Faithful reproduction of the warm valve sound’ was mentioned again in relation to the evoking of memory, indicating the potential importance of historical accuracy in the sonic delivery of the audio augmented object. Participants also expressed an interest in further levels of sonic engagement with the object; for example, one participant mentioned that they almost expected to hear ‘more stations when pointing the phone at the tuning dial on the radio’. Two participants made reference to the ‘abstract’ nature of the experience and expressed interest in a more literal and faithful relationship between the object and the delivery of the audio content. One participant commented on how the combination of the real object and the virtual audio triggered their imagination, much as listening to music can be a catalyst for the mind’s eye, suggesting that having a physical object in front of them which directly related to the content on their headphones in some way amplified this experience:
It just brings the sound out more, so you’re kind of just looking at the object, imagining things, the object’s actual sounds but without touching it.
A thematic analysis of the recorded video footage of all our participants’ interactions with the installation highlights a common interactional sequence, as illustrated in Fig. 4. Generally, we observe eight distinct phases of interaction with the experience: preparation, familiarisation, exploration, investigation, focussed listening, second-level focussed listening, interruption and finishing. We see how, through a process of familiarisation, our participants quickly associate their bodily movements with the receipt of the spatialised audio sources, and then begin to explore the interactional setting to see what they can find. Subsequent to this, we witness our participants returning to investigate the location of some of these sources and engaging in listening to them. This phase of focussed listening can sometimes develop into a more attentive and engaged listening activity, observable in participants attempting to achieve a very close proximity to the location of the virtual sound source. We see how personal space and acceptable social proximities affect the process of virtual sound exploration and investigation, and how these predefined and mutually agreed proximities become more flexible during phases of engaged listening. We will now look at each identified interactional phase in more detail.
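For analysis purposes, the eight phases could be coded as an ordered scheme; the labels below mirror those in the text, while the coding helper itself is a hypothetical sketch rather than the scheme actually used in the thematic analysis:

```python
from enum import IntEnum

class Phase(IntEnum):
    """The eight interactional phases identified in the analysis,
    in the order they were generally observed."""
    PREPARATION = 1
    FAMILIARISATION = 2
    EXPLORATION = 3
    INVESTIGATION = 4
    FOCUSSED_LISTENING = 5
    SECOND_LEVEL_FOCUSSED_LISTENING = 6
    INTERRUPTION = 7
    FINISHING = 8

def follows_common_sequence(coded_session):
    """Hypothetical check that a coded session never moves backwards
    through the phase ordering (repeats of a phase are allowed)."""
    return all(a <= b for a, b in zip(coded_session, coded_session[1:]))
```

Such a monotonicity check would flag sessions that deviate from the common trajectory, for instance a participant returning from focussed listening to renewed exploration.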
Preparation
It is envisaged that the application will eventually be made available for listeners to download onto their own devices, enabling institutions to economically deploy such experiences. As such, familiarity with and access to an appropriate device would be assumed, with the exception of the listening station approach discussed earlier. Although all participants put their headphones on unprompted when they were ready to start, four participants needed to be reminded to put them on the correct way around (essential for the correct orientation of the interactive surround sound). From the recorded video of our participants’ interactions with the installation, we note that two of our ten participants required instructional prompts from the researcher to engage in an exploration of the space.
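Wearing the headphones the wrong way around matters because the rendered surround sound depends on the source’s bearing relative to the listener’s facing direction; reversed headphones effectively swap left and right. A minimal sketch of that relationship, with assumed coordinate and angle conventions:

```python
import math

def relative_bearing(listener_pos, heading_deg, source_pos):
    """Bearing of a virtual sound source relative to the listener's
    facing direction, in degrees: 0 = straight ahead, positive = to
    the listener's right, negative = to the left.

    Illustrative only: positions are (x, z) pairs, heading_deg is the
    listener's world-frame heading. A reversed headset effectively
    negates this angle, swapping the left/right image.
    """
    dx = source_pos[0] - listener_pos[0]
    dz = source_pos[1] - listener_pos[1]
    world_bearing = math.degrees(math.atan2(dx, dz))  # bearing in world frame
    # Wrap the difference into (-180, 180] so panning is symmetric.
    return (world_bearing - heading_deg + 180.0) % 360.0 - 180.0
```

A binaural renderer would feed this angle into its panning or HRTF stage; with the headphones reversed, a source computed as 45° to the right would be heard 45° to the left.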
Familiarisation
This phase of familiarisation is distinguishable within the video recordings of our participants’ interactions by the various lateral movements our participants made. This seems to indicate an initial process of familiarisation with the association between bodily movement and the interactive positioning of the surround sound. These movements are often terminated by an acknowledging sign of appreciation, perhaps confirmation that the association has been recognised and understood. These lateral movements were observed being performed in a variety of different ways. Some participants swayed from side-to-side with their device held in alignment with their body and head. One participant waved their device in a lateral motion within a few moments of starting the experience and kept their body stationary whilst doing so. Another participant rotated their upper body in a lateral motion, and therefore also the device they were holding.
During this phase of familiarisation, a detachment of the focal gaze from the screen of the device was observed. In other words, the participant, through their particular process of positional familiarisation, was observing the physical object directly, rather than secondarily through the screen of the device.
This initial process of familiarisation with embodied interactions with spatialised audio via repeated lateral movement is consistent with Heller and Borchers’ AudioTorch [18]. Equally consistent with AudioTorch is the quick link achieved between hand and ear, a link that, for the most part, remains unbroken and can be observed in participants keeping their head aligned with the orientation of the device in their hand for the duration of the experience.
Exploration
After the brief familiarisation phase described above, all our participants can be observed within the video recordings of their interactions walking a full 360° around the radio, often pausing briefly at the locations of the audio signals, as indicated in Fig. 3. The direction of exploration, clockwise or counter-clockwise, was most often determined by the first participant to start moving around the object; similarly, the length of the participants’ pauses at the locations of the audio signals was often determined by one participant resuming their exploration around the radio and prompting the other to resume theirs. This behaviour led to the two members of each pair exploring adjacent sound source locations: as one member began to travel to the location of the next broadcast, so did the other.
This type of exploratory behaviour is observed amongst all our participant pairs, though there are some occasional exceptions. These exceptions appear to take place either when one of the participants has become engaged in the next phase of investigatory interaction, or if the participants appear to have a greater degree of social familiarity with each other, which can be indicated by an observed acknowledgment of each other and a sharing of an appreciation of the experience.
Investigation
Within this phase, we see members returning to the locations of the audio broadcasts that they identified during their exploratory phase to investigate them further, and we begin to see exploratory interpretations of the smartphone device as an interface to the audio content. These interpretations take on a variety of styles: one participant holds their device aloft in an antenna-like fashion, directly reflecting the subject of both the virtual and the physical; another uses their device as a virtual microphone, moving it towards points of interest around the artefact; others listen through the window of the screen, or rather, observe the radio through the screen of the device whilst listening through their headphones. During this phase of interactional activity, we also observe participants sharing the same audio sources and interacting with the installation in much closer proximity to each other.
Focussed listening
The investigation phase, where our members revisit the virtual audio broadcasts they identified within their exploratory phase, quickly develops into focussed listening. This is discernible within our video recordings of their interactions by the participant remaining stationary for a prolonged period for the first time since beginning their interactions with the installation. Evident within this interactional phase is an apparent dissociation from the physical object itself, with participants observed closing their eyes or seemingly focussing on other, more distant objects whilst they concentrate on the audio content. This behaviour is also documented in one participant’s written feedback, though it is interesting that, despite the visual dissociation from the radio object, a strong sonic and physical attachment to it remains:
It was a fascinating experience. The object came alive, I entered a new sonic dimension where I was totally immersed. (I also closed my eyes repeatedly). I was trying to understand the context of sound content, the words of the man speaking.
Again, despite this visual dissociation from the object whilst engaged in these periods of focussed listening, these events initially take place at either the front or the back of the object, areas of distinct visual interest compared with the two rather plain wooden sides: the exposed electronic and mechanical insides at the rear, and the TV screen and radio dials at the front. This behaviour is observed despite the two audio broadcasts being located at the sides of the object, as shown in Fig. 3.
Second-level focussed listening
Throughout the recordings of all our pairs of participants, we witness moments when at least one of the participants engages in listening in much closer proximity to the object, often crouching down in order to obtain a physical position very close to the centre of the virtual sound source. This happens exclusively at the front or the rear of the object, where the object’s mechanical and electrical interfaces and inner workings, respectively, can be seen.
Interruption and finishing
The interruption of a participant’s activities, which often resulted in them finishing their interaction, occurred when the other member of the pair decided they had finished. Evident throughout the recorded interactions, in all but one of our five pairs of participants, the end of participation was initiated by one participant removing their headphones, prompting the other to do the same, even though the participants never started at exactly the same time. In the one event in which this did not happen, the other participant was engaged in second-level focussed listening.
From our video recordings, we found that our participants spent an average of 3′17″ exploring the installation. The combined length of unique audio content available to listen to was 6′20″ (excluding the looped background static recording). Therefore, if we assume that none of our participants listened to the same piece of audio more than once, our participants listened, on average, to 51% of the available audio broadcast material. Only one of our participants reported a potential fault with the system.
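The 51% figure follows directly from the two durations reported above:

```python
# Average exploration time and total unique audio content, in seconds.
avg_exploration = 3 * 60 + 17   # 3'17" = 197 s
unique_content = 6 * 60 + 20    # 6'20" = 380 s

# Assuming no piece of audio was heard twice, the fraction of the
# available broadcast material covered on average:
coverage = avg_exploration / unique_content   # ≈ 0.518, i.e. 51%
```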
Within this model, we see some phases of interaction that resonate with the findings and observations of some of the previously mentioned related works in this area. This includes the use of virtually attached sound as an advertiser that draws users towards the audio augmented object for closer investigation. This is identified, though not specifically exploited, by Zimmerman and Lorenz [33] and could be said to be evident within our participants’ trajectories from exploration through to investigation and focussed listening. Furthermore, we see evidence of this second level of focussed listening within the work of Montan [22], where differently treated zones of reverb are triggered by a user’s close proximity to the audio augmented object, generating a soundscape within a soundscape and, amongst participants, the feeling of entering a space distinct from the one outside. Based on these commonalities, we can perhaps begin to generalise more widely across cultural applications of AAR, as well as other potential applications, and provide some foundations of a theoretical model for attraction and immersion within applied AAR experiences.
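The proximity-triggered reverb zones attributed to Montan [22] might be sketched as a wet/dry mix that engages only inside a close-proximity zone; the zone radius and function names here are illustrative assumptions, not Montan’s implementation:

```python
def reverb_mix(distance, zone_radius=0.75):
    """Illustrative wet/dry reverb mix that engages only when the
    listener enters a close-proximity zone around the augmented object,
    producing the 'soundscape within a soundscape' effect described in
    the text.

    distance: listener's distance from the object in metres.
    zone_radius: assumed radius of the treated zone, in metres.
    Returns (dry_gain, wet_gain).
    """
    if distance >= zone_radius:
        return 1.0, 0.0                    # outside the zone: untreated audio
    wet = 1.0 - distance / zone_radius     # deeper into the zone -> more reverb
    return 1.0 - wet, wet
```

Crossing the zone boundary, as in the crouched second-level focussed listening we observed, would thus shift the listener into a differently treated acoustic space, supporting the reported feeling of entering a different space.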