
Evaluating the user experience of acoustic data transmission

A study of sharing data between mobile devices using sound


Users of smart devices frequently need to exchange data with people nearby. Yet despite the availability of various communication methods, data exchange between co-located devices is often complicated by technical and user experience barriers. A potential solution to these issues is the emerging technology of device-to-device acoustic data transmission. In this work, we investigate the medium-specific properties of sound as a data exchange mechanism, and ask how these contribute to the user experience of sharing data. We present a user study comparing three wireless communication technologies (acoustic data transmission, QR codes and Bluetooth) when used for a common and familiar scenario: peer-to-peer sharing of contact information. Overall, the results show that acoustic data transmission provides a rapid means of transferring data (mean transaction time of 2.4 s), in contrast to Bluetooth (8.3 s) and QR (6.3 s), whilst requiring minimal physical effort and user coordination. All QR code transactions were successful on the first attempt; however, some acoustic (5.6%) and Bluetooth (16.7%) transactions required multiple attempts to successfully share a contact. Participants also provided feedback on their user experience via surveys and semi-structured interviews; this feedback centred on perceived transaction time, physical effort, and connectivity issues. Specifically, users expressed frustration with Bluetooth due to device selection issues, and with QR for the physical coordination required to scan codes. The findings indicate that acoustic data transmission has unique advantages in facilitating information sharing and interaction between co-located users.


The ubiquity of personal smart devices has led to a scenario in which we create and capture increasing amounts of content, generating a proportional demand to share that content with those around us. This near-field data exchange is typically facilitated by wireless connectivity, which allows a user to create an ad hoc connection with co-located peers, through which data can be shared with an individual or group. However, despite the variety and sophistication of connectivity options, a short-range exchange of information can still be a frustrating experience. Central to this problem is that peer-to-peer transactions are poorly supported by current smart devices. Despite the many options available for data transfer across devices, none is ubiquitous, cross-platform, and free of user interface friction (by which we mean the need to associate devices or establish a temporary network).

Of the options available, Bluetooth is perhaps the most widespread, but for ad hoc interactions it is susceptible to usability issues, as it requires a multi-step device discovery process [16, 18]. Alternative technologies such as Wi-Fi Direct and RFID-based near-field communication (NFC) exist, but are currently not feasible methods for peer-to-peer data exchange due to differing cross-platform implementations, leading to issues with device compatibility [48]. For example, although Wi-Fi Direct is now widely adopted on Android devices, it does not exist on iOS, where a proprietary alternative is used [73], and the RFID hardware on iOS devices is not currently exposed to developers. A further problem within the user experience of NFC technologies is the opacity, or lack of ‘visibility’, of the interaction, and the absence of shared status feedback. User feedback and visibility of the system status are key usability heuristics [61, 62]; yet, in a casual ad hoc interaction, it may not be obvious to other participants what the common state is in order to progress the transaction. This can hinder the speed and success of a sharing activity, and is particularly critical when problems arise in data exchange, potentially amplifying user frustration when device discovery or data sharing fails.

Acoustic data transmission presents an interesting alternative to the aforementioned technologies. In this approach, digital information is encoded in audio signals for transmission between air-gapped loudspeakers and microphones. Audio playback is supported on a broad range of hardware, including all mobile phones, so it immediately offers multiple ways to generate, transport, receive and decode sound on today’s devices. It therefore offers a frictionless way to transmit data between devices by utilising existing sensors. Such acoustic data transmission technology can support one-to-many transactions, unlike many wireless mechanisms. It has the further advantage that it is visible as an interaction medium, providing shared insight into the status of a sharing activity.

Despite significant research into both applications and the underlying technology (which we discuss in Section 3), to our knowledge, there exists no research on the user experience of using acoustic data transmission, either directly or in comparison to alternative wireless communication technologies. In this work, we address this by asking whether acoustic data transmission solves the aforementioned limitations, provides a viable and user-friendly mode of near-field data exchange, and has the potential to enhance the user experience (UX) of exchanging data between devices. We use Chirp [15], an existing, commercially-available implementation of acoustic data transmission technology, which was developed in part by the authors.

In Sections 2 and 3 of this paper, we outline the opportunity for Chirp as a complement to other wireless technologies. We identify the benefits of sound, and thus how it can facilitate peer-to-peer transactions. In Section 4, we present a user study that compares Bluetooth (BLE), QR and Chirp in a simple peer-to-peer contact sharing task, evaluating the UX across the proposed technologies. The results suggest that Chirp can facilitate friction-free interaction between users and their devices, minimising the effort required and thus resulting in a more desirable UX. In summary, we present findings that identify Chirp as being as fast at individual sharing actions as QR codes, and significantly faster than BLE. Chirp also enables a sat-back interaction style that does not involve significant physical actions, similar to BLE, but dissimilar to QR, which involves physical manipulations of the devices and requires users to coordinate their positions in order to complete transactions.

Together, the quantitative and qualitative analyses from the user study suggest that there are significant opportunities in collaborative systems for data sharing using sound.

Peer-to-peer data sharing

Collaborative context

The use of smart devices to support co-located interaction has attracted considerable attention over the past decade [30, 50, 54]. Users typically have a significant amount of personal content on their phones that they wish to share with people around them, including, for example, photos [19, 43, 52], calendars [22] and notes [51].

A key part of small group interaction is how the scope of the interaction is defined. At least four classes can be identified: interactions facilitated by a shared device (e.g. [26, 34]), speculative interaction facilitated by ad hoc discovery of potential partners (e.g. Nintendo StreetPass [63]), server-based proximity services (see [42]), and user-activated sharing. We will focus on user-activated ad hoc collaborations. We will assume that the devices are user-owned, that there is no third party sharing service, and that the devices are not already paired or otherwise linked.

User-activated sharing can be achieved in a number of different ways. Often, there is a pairing or device association step where the devices that will interact are identified [17]. This interaction can be as simple as pressing two virtual or real buttons simultaneously (e.g. pressing a physical button on a new game controller and pressing a virtual button on the console to pair it). More novel methods include shaking, touching or banging the devices (e.g. [14, 31, 33, 53, 58]), or using audio as a spatial trigger (e.g. [74, 77]).

To minimise friction, effort and interaction time, the ideal user experience for a sharing task is one in which minimal or no user intervention is required. For this reason, this paper will focus on technologies which do not require any prior shared actions or pairing before the exchange itself takes place. We will also limit the scope to scenarios of one-off data transmission, rather than continuous, synchronous interaction, and omit multi-channel hybrid approaches in which audio (or other means) is used to pair an additional communication channel (cf. [71, 76]).

Device-to-device data sharing

There is a plethora of technologies for sharing information between devices. In the space of Internet of Things (IoT) devices, there may be the opportunity for only one or two technologies on any single device because of the requirements for low power and low cost [78]. In contrast, modern smart phones contain numerous sensors, such as motion sensors, cameras, various types of radio chips and microphones. Each of these may be used for ad hoc device-to-device communication.

The use of cameras to read coded information has a long history in collaborative technologies. Denso Wave developed the QR code in 1994; it is now an international standard [37], and many smart devices come with a QR code reader by default. Applications can generate QR codes on the fly, which allows the sharer’s screen to be used as the display surface, as long as the users involved in the transaction can align the receiving camera and display to complete the interaction. There are many similar visual-code-based systems (see, for example, [41]), although the QR code is perhaps the most popular.

Smart phones have a range of capabilities for radio communication. Broadband cellular network technology (3G/4G) is very broadly deployed, but does not facilitate device-to-device communication for data sharing. Many phones support ad hoc Wi-Fi, but this can be at the expense of disabling wide-area connections, so it is not appropriate for fast, ad hoc communications at the current time. Bluetooth is commonly available in smart devices. Given its relatively high bandwidth, it has found good use in personal networks between peripherals. The more recent version, Bluetooth Low Energy (BLE), offers improved functionality for ad hoc communication between devices [65], removing the need for device pairing. Many modern smart devices can also read radio-frequency ID tags based on the NFC protocol. These can be used for ad hoc sharing between devices, but this is not as well explored as Bluetooth to date [13, 21]. Further radio-based technologies include ultra-wideband [1] and millimetre wave systems [70], such as 5G cellular networks. Whilst these technologies present promising solutions for low-energy, short-range, high-bandwidth communications, they are not currently widely adopted, and at present very few smart devices contain the hardware required to operate in the required frequency ranges.

We will address the remaining modality, audio, in the following section.

Acoustic data transmission


As phones have evolved, their audio generation and processing abilities have expanded. For example, recent devices might have ‘always on’ listening to enable voice activation. Smart devices have full digital audio generation and sampling capabilities, but even older non-smart devices have microphones, speakers and the associated circuitry. The power consumption of using audio detection can be significantly lower than radio [74]. As a result, there exist many digital and analogue systems for generation, transport and presentation of audio.

Thus, it is sensible to use built-in microphones on a device as a sensing platform. While audio communication underpinned early long-distance communication through the use of modems over wired networks, it was somewhat overlooked as other wireless technologies proliferated in the 1990s [56]. In this section, we review some related technologies that have used acoustic data transmission, outlining the unique benefits that this technology presents to the user interface designer. Furthermore, we introduce Chirp, our implementation of acoustic data transmission.

Audible vs. near-ultrasonic

Acoustic data transmission technologies can be loosely divided into two categories based on their range in the acoustic spectrum, and thus their perceptibility to the human ear: audible (sub-15 kHz, audible to the majority of listeners) and near-ultrasonic (17–20 kHz, which are imperceptible to many adult listeners but can be detected by typical consumer microphones). Perhaps the first near-ultrasonic direct communication system was developed by Gerasimov and Bender [25]. By its nature, near-ultrasonic communication is not audible to most users, so its presence in an environment is not obvious. This makes it a good candidate for beacon-like or side-channel communication. It can be played on its own or embedded into another audio recording. Recognising that the greatest advantage of near-ultrasound communication was that no extra hardware was required, Ka et al. proposed a framework for TV second screen services [39]. Near-ultrasonic data over sound has also been used to communicate with wearable devices [68], transmit data from within shipping containers [35], share network credentials in an industrial IoT setting [24], and for wireless communication between everyday personal electronic devices and hearing aids [59]. In addition, it has been previously used for near-ultrasonic beacons, for example to control a smartphone museum guide [7]. There are obvious security concerns with inaudible data over sound: users may not be aware that data is being transmitted, and thus covert channels might be enabled [3, 12, 57]. However, because it is inaudible and can thus be present continuously, it has other potential such as measurement of the movement or location of devices (e.g. [14, 74, 80]).

In the audible range, there is a design choice to make the data obvious or not. One prominent audible code is dual-tone multi-frequency signalling (DTMF), still in common use for communication over voice calls. When choosing other audio designs, two important factors are throughput and robustness. However, these are in tension with the desire to have tones that sound pleasant to the human ear. The early work of Madhavapeddy et al. [55] suggests a number of encoding strategies. Using DTMF between devices 3 m apart, they achieved 20 bits per second (bps) at 0.005% error per symbol. Using on-off keying at multiple frequencies, they achieved 251 bps with a 4.4 × 10⁻⁵ error rate. The concurrent work of Lopes and Aguiar [49] similarly suggests various protocols. They achieved 125 bps using Johann Sebastian Bach’s Badinerie as the melody code. By using a harmonic frequency shift key, they achieved 800 bps with few errors, but the output would sound more like noise than anything resembling a melody.

Chirp: a software framework for acoustic transmission

Chirp [15] is a software framework that facilitates over-the-air acoustic transmission. Originating in research at University College London, it was first released as a near-field image-sharing mobile app [5], and now exists as a range of cross-platform SDKs, with both free and commercial licenses.

Chirp uses frequency-shift keying (FSK) [72, p.173] for its modulation scheme, due to its robustness to the multipath propagation present in real-world acoustics [38] in comparison with schemes such as phase-shift keying [72, p.168] or amplitude-shift keying [72, p.165]. For spectral efficiency, Chirp uses an M-ary FSK scheme, encoding input symbols as one of M unique frequencies. Each symbol is modulated by an amplitude envelope to prevent discontinuities, with a guard interval between symbols to reduce the impact of reflections and reverberation on the tone detection.
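The modulation described above can be sketched as follows. The alphabet size, base frequency, tone spacing and symbol timing are illustrative assumptions, not Chirp’s actual protocol parameters:

```python
import math

# Sketch of M-ary FSK: each symbol selects one of M tones, shaped by an
# amplitude envelope, with a silent guard interval between symbols.
# All numeric constants below are assumptions for illustration only.
M = 32              # alphabet size: each symbol is one of M tones
BASE_HZ = 1760.0    # frequency of symbol 0 (assumed)
STEP_HZ = 60.0      # spacing between adjacent symbol tones (assumed)
RATE = 44100        # sample rate supported by most consumer hardware
SYMBOL_S = 0.08     # tone duration per symbol (assumed)
GUARD_S = 0.02      # guard interval to let reflections decay (assumed)

def symbol_freq(sym: int) -> float:
    if not 0 <= sym < M:
        raise ValueError("symbol out of range")
    return BASE_HZ + sym * STEP_HZ

def modulate(symbols):
    """Return raw samples: enveloped tone per symbol, guard gap between."""
    out = []
    n_tone = int(RATE * SYMBOL_S)
    n_guard = int(RATE * GUARD_S)
    for sym in symbols:
        f = symbol_freq(sym)
        for n in range(n_tone):
            env = math.sin(math.pi * n / n_tone)  # envelope avoids clicks
            out.append(env * math.sin(2 * math.pi * f * n / RATE))
        out.extend([0.0] * n_guard)               # guard interval
    return out
```

The envelope ramps each tone up from and back down to zero, which prevents the waveform discontinuities that would otherwise produce audible clicks and spectral splatter.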

A Chirp payload is prefixed by a fixed set of preamble tones, to indicate the beginning of a message and to establish timing and synchronisation. It is suffixed by Reed-Solomon forward error correction (FEC) coding [66], enabling audio to be decoded when symbols are obscured due to background noise or reverberation. The transmission protocols can be configured for specific environments and acoustic channels, including both audible and near-ultrasonic bands. Both of these bands are supported by the majority of consumer audio devices that support sample rates of 44.1 kHz.
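The frame layout described above can be sketched as a preamble, the payload, and trailing error-correction data. The preamble values here are invented, and a single XOR parity byte stands in for the Reed-Solomon FEC that Chirp actually uses (real RS coding can correct, not just detect, corrupted symbols):

```python
# Sketch of the frame layout: preamble symbols, payload bytes, then
# error-correction data. PREAMBLE values are assumptions; the XOR parity
# byte is a simplified stand-in for Reed-Solomon parity symbols.
PREAMBLE = [0x01, 0x02]  # fixed tones marking frame start (assumed)

def xor_parity(payload: bytes) -> int:
    p = 0
    for b in payload:
        p ^= b
    return p

def frame(payload: bytes) -> list[int]:
    return PREAMBLE + list(payload) + [xor_parity(payload)]

def deframe(symbols: list[int]) -> bytes:
    if symbols[:2] != PREAMBLE:
        raise ValueError("no preamble found")
    body, check = symbols[2:-1], symbols[-1]
    if xor_parity(bytes(body)) != check:
        raise ValueError("corrupted frame")
    return bytes(body)
```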

Chirp SDKs are designed to be integrated into client applications, and typically handle interaction with the operating system’s audio I/O layer. The client application provides the SDK with an array of bytes to transmit, which is encoded and played from the device’s loudspeaker. On the receiving device, audio is sampled from the microphone. When a Chirp signal is detected and decoded from the input stream, it is presented to the client application in a callback function.
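In a client application, the send/receive flow described above might look like the following. The class and method names are hypothetical, not the real Chirp SDK API, and an in-process loopback stands in for the loudspeaker-to-microphone path:

```python
# Illustrative shape of the encode/play and listen/callback flow.
# Names are hypothetical; a direct loopback replaces the audio path.
from typing import Callable, Optional

class AcousticChannel:
    def __init__(self):
        self._on_received: Optional[Callable[[bytes], None]] = None

    def set_received_callback(self, cb: Callable[[bytes], None]) -> None:
        self._on_received = cb

    def send(self, payload: bytes) -> None:
        # Real SDK: encode the bytes and play them from the loudspeaker.
        # Here we loop straight back to the receive path.
        self._deliver(payload)

    def _deliver(self, payload: bytes) -> None:
        # Real SDK: called when a signal is decoded from the microphone.
        if self._on_received:
            self._on_received(payload)

received = []
channel = AcousticChannel()
channel.set_received_callback(received.append)
channel.send(b"vcard-data")
```

The callback style mirrors the asynchronous nature of the channel: the receiving application does not poll, but is notified whenever a complete payload has been decoded from the input stream.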

Benefits of using sound to transmit data

In this section, we will briefly discuss the benefits of acoustic data transmission, in relation to the two alternative technologies included in the present study: QR and BLE. We selected the wireless technologies based on their suitability for the task, availability on popular mobile devices, and the type of interaction that they afford. QR is a readily available method for transferring contact details and vCards (being one of the default options to share a contact on Android devices). In addition, it can be used for many of the same applications as synchronous direct peer-to-peer mechanisms, such as authenticating users [46] and secure peer-to-peer data transfer [32, 64]. In terms of ubiquity, it is possible for any device with a camera (including all smart phones and tablets) to read QR codes, making it more readily available to users than less well-established technologies with specific hardware requirements, such as NFC. Much like Wi-Fi Direct, BLE is an RF-based technology that requires a device discovery stage, and both BLE and Wi-Fi Direct have been shown to have comparable durations for establishing a connection between devices [40]. As such, we considered these technologies to be very similar for our application in terms of the respective general benefits, at least within the scope of the present study (we note that Wi-Fi Direct has considerable benefits in terms of range and data rate, at the expense of power consumption; however, the data rate and range of BLE was sufficient for our task). For this reason, we chose to include only one of BLE or Wi-Fi Direct, and BLE was selected as the more widely available and better-established technology (with Wi-Fi Direct unavailable on iOS devices, where only a proprietary equivalent exists [73]).

As with QR and BLE, acoustic data transmission has particular benefits that make it more or less suitable to specific applications. An overview of these is given in Table 1. From a technical perspective, as with BLE, acoustic data transmission is capable of one-to-one, two-way, and one-to-many (broadcast) transmissions. The former are useful for transmitting data objects between two users (such as contact details or URLs), but the latter presents a number of wide-reaching applications such as broadcasting status updates at transit stations, or providing information about collections in an art gallery. In addition, because it can utilise existing audio systems, data can be broadcast to radio listeners, TV viewers, or over public address systems by simply playing the data over the normal channels. Furthermore, because acoustic data transmission does not operate in the electromagnetic spectrum, the acoustic spectrum may be used in scenarios where restrictions on radio-frequency (RF) transmissions exist, such as in explosive or flammable environments.

Table 1 Outline of the benefits of acoustic data transmission (ADT) in relation to the technologies compared in the user study

As previously mentioned, acoustic data transmission can utilise devices’ existing hardware components and infrastructures where microphones and speakers are already built in. This makes it extremely cheap and easy to integrate in legacy equipment, compared to QR, which requires a camera, or BLE, which requires technology-specific hardware. However, acoustic data transmission has relatively low data rates compared to RF-based technologies. Specifically, BLE has physical layer and application throughput data rates of 1 Mbps and ~240 kbps respectively [27]. The data rate for acoustic data transmission is dependent on the protocol and encoding scheme, which can be tuned for specific ranges and bit error rates. The standard Chirp audible and ultrasonic protocols have data rates of 100 bps and 200 bps respectively. However, for very near-field (sub 30 cm) transmission, up to 1 kbps is achievable using FSK modulation. The maximum amount of data represented by a QR code also varies depending on the encoding scheme. For binary encoding, it is possible to represent up to ~3 kB of data. It should be noted that it is not clear how this relates to data rate, as the transfer of data using QR codes requires a camera and code to be aligned; therefore, transmission duration will depend on a number of factors, including motor control of the user and the distance between the QR code and camera.
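A back-of-envelope calculation makes the throughput gap concrete. The 24-byte payload below is an assumption (the study does not report its payload size); raw airtime scales linearly with payload length:

```python
# Back-of-envelope airtime at the data rates quoted above.
# The 24-byte payload is an illustrative assumption.
def airtime_s(payload_bytes: int, rate_bps: float) -> float:
    """Seconds of raw transmission time, ignoring framing overhead."""
    return payload_bytes * 8 / rate_bps

chirp_audible = airtime_s(24, 100)       # 1.92 s at 100 bps
chirp_ultrasonic = airtime_s(24, 200)    # 0.96 s at 200 bps
ble_app = airtime_s(24, 240_000)         # 0.0008 s at ~240 kbps
```

Despite BLE’s vastly lower airtime, its mean transaction time in our study was the longest of the three modes, suggesting that for small payloads the end-to-end duration is dominated by device discovery and user coordination rather than raw data rate.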

Acoustic data transmission requires both sender and receiver devices to be within hearing range of each other, and QR codes require line-of-sight, whereas BLE does not have either constraint. This can have important implications for privacy and security, depending on the use case. Acoustic data transmission may be made secure by limiting the usable range of the protocol; however, to fully protect against eavesdropping attacks, end-to-end encryption is required. For both acoustic data transmission and QR, this must be implemented at the application layer, whereas encryption is available at the link layer in BLE, at least for paired devices (albeit the protection against eavesdropping offered by BLE is limited [67]). In some instances, these technology-specific properties may be desirable, whereas in others, they may be considered as disadvantages. As such, it is clear that there is no ‘one-size-fits-all’ solution to wireless data transmission, and it is conceivable that the choice of technology will be dependent on a number of technical requirements.

In this section, we have considered the technical features of each of the wireless technologies. However, there exists little work on how these features relate to the user experience. For example, does having no configuration or pairing requirements actually provide a more frictionless user experience? Does the inherent audible notification have any benefit to users in terms of feedback and control? Does the requirement to open a camera for reading QR codes or find a target Bluetooth device interrupt the user to such an extent that it impedes flow and causes frustration? These are the questions that we seek to address through our user study. In particular, we are interested in the advantages and disadvantages that are presented by each of the compared technologies, each of which is technically capable of achieving the same end goal, and how these ultimately impact on the user experience.


Given the benefits of exchanging data over sound as outlined in the previous section, we are interested in evaluating the user experience of the technology in a real-world application. In this section, we present the design and results from a user study based on a simple peer-to-peer contact-sharing task. In particular, we are interested in the effect of the respective technologies (BLE, QR and Chirp) on transaction time, ease of use, user preference, and overall experience.

Experiment design

Three contact sharing role-play scenarios were formulated for the study: one for each mode (BLE, QR and Chirp). For each scenario, participants (n = 12) worked in pairs, and were tasked with sending and receiving three contact details using a simple address-book application. The participants each took part in three sessions (one for each mode), giving 18 total ‘transactions’ per participant. Our approach followed a within-subjects design and used a complete Latin square Williams design [79] balanced for first-order carry-over residual effects, consisting of three treatments and three periods (3 × 3) in six sequences (ABC, ACB, BAC, CAB, BCA, CBA). Participants were randomised in equal numbers to the six possible sequences of treatments, and also randomly assigned a different partner during each session so that no participant was paired with the same partner twice. Each session took place in a closed meeting room containing a table and chairs or sofa.
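The six sequences above can be checked for first-order carry-over balance programmatically: in a Williams design, every ordered pair of distinct treatments should appear equally often in adjacent periods. A small verification sketch:

```python
from collections import Counter
from itertools import permutations

# Verify first-order carry-over balance of the six treatment sequences
# used in the study (A, B, C standing for the three modes).
SEQUENCES = ["ABC", "ACB", "BAC", "CAB", "BCA", "CBA"]

def carryover_counts(sequences):
    """Count how often each treatment immediately precedes another."""
    pairs = Counter()
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            pairs[(a, b)] += 1
    return pairs

counts = carryover_counts(SEQUENCES)
# Balanced: every ordered pair of distinct treatments occurs equally often.
balanced = all(counts[p] == 2 for p in permutations("ABC", 2))
```

With three treatments (an odd number), a single Latin square cannot achieve this balance, which is why the design uses all six sequences.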

Following each session, participants completed a survey based on the Usability Metric for User Experience (UMUX) [23], using a four-item, 7-point Likert scale ranging from 1–7 (strongly disagree to strongly agree). The UMUX is designed for the subjective assessment of a system’s perceived usability, and was formulated as an improvement of the System Usability Scale (SUS) [10]. UMUX conforms to the ISO 9241-11 [36] definition of usability, which suggests that measures of usability should cover: users’ ability to complete a task using the system, the quality of the resulting output (effectiveness), the level of resources employed in performing the task (efficiency), and users’ subjective reaction towards the use of the system (satisfaction). Following discussions about the validity of the system [8, 11], the UMUX has been re-assessed and validated in various studies [6, 75], and a UMUX-LITE version has also been proposed [45]. Overall, the UMUX has proven a compact, valid and reliable usability component for measuring the user experience of a system or technology, making it an appropriate metric for our study.


Participants

Twelve participants (4 males, 8 females; aged 21–46, median age = 25) were recruited through a combination of email and social media invitations, and an online user research recruitment platform. As such, they had a range of backgrounds, and included students, researchers, and working professionals. All participants reported owning a smartphone and having experience using both Bluetooth and QR technologies. A power analysis was conducted using the simr package for R [29]. Based on 3 groups (for the 3 modes), an effect size of 0.5 and alpha = 0.05, simulations indicated a power for predicting mode of between 0.93 and 1.0 (95% confidence interval) with 12 participants. This gives 108 observations using a balanced repeated measures design (36 observations per mode, 6 transactions per pair, 6 unique pairs). This also allows for each participant to complete the task in each modality with a randomly assigned partner, whilst avoiding pairing the same participants more than once.

Implementation of the technologies

We developed a simple mobile demo application for sharing contact details via Bluetooth, QR codes and Chirp (Fig. 1). The application simulated an address book, giving users the option to view, share and receive contacts. All versions offered the same functionality to send and receive contacts. The application was installed on six mobile devices running Android version 7, which were provided to participants while performing the task. All user actions and network calls were logged for analysis. The application was designed such that the same number of user actions were required to share a contact, regardless of the technology being used (see Tables 2 and 3).

Fig. 1

Screen capture of the contact sharing application. Sending and listening for a contact (via Chirp)

Table 2 The work flow for sending a contact using each of the three technologies. Each process contained the same number of actions (2)
Table 3 The work flow for receiving a contact using each of the three technologies.


Procedure

All participants were given verbal instructions on how to use the demo application before starting their first session. Participants were also provided with written instructions of the task and role play scenario at the start of each session. The facilitators configured the application before starting the sessions, to use either BLE, QR or Chirp, depending on the mode being tested in the given session.

Following each task, the participants completed the usability survey (Table 5). After completing all three sessions, semi-structured interviews were conducted, in which the participants were asked a consistent set of open-ended questions, prompting them to talk through their experience using the different technologies.


Transaction time and failure rate

For the quantitative analysis we investigated 2 metrics: (i) the number of attempts required to successfully share each contact and (ii) the time taken to share a contact. These metrics were derived from the data logged by the demo application (every user action and network event was recorded). The demo application was designed to ensure that sharing a contact required the same number of user actions for each technology for both sender and receiver (as shown in Tables 2 and 3). The time taken to share a contact is defined as the duration between the user initiating the share action (step 1 in Table 2) and the contact being received on the recipient’s device (step 2 in Table 3). The number of attempts is defined as the number of times a user initiates ‘share contact’ before the contact is received on the recipient’s device. All contacts were successfully transferred for the 108 transactions. For QR, 100% of contacts were sent on the first attempt, whereas for Chirp and BLE, this was 94.4% and 83.3% respectively, as shown in Table 4.
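The two metrics can be illustrated by deriving them from a transaction’s event log. The event schema (timestamp, action name) below is hypothetical, standing in for the demo application’s actual log format:

```python
# Sketch of deriving the two study metrics from logged events.
# Event schema (timestamp_s, action) is a hypothetical stand-in for
# the demo application's real log format.
def transaction_metrics(events):
    """events: ordered (timestamp_s, action) tuples for one transaction."""
    share_times = [t for t, a in events if a == "share_contact"]
    received = next(t for t, a in events if a == "contact_received")
    return {
        "attempts": len(share_times),                # shares before receipt
        "duration_s": received - share_times[0],     # first share -> receipt
    }

# A transaction that needed two attempts before the contact arrived:
log = [(0.0, "share_contact"), (4.1, "share_contact"),
       (6.3, "contact_received")]
m = transaction_metrics(log)
```

Here the duration is measured from the first share action, on the (stated) assumption that retries are part of the same transaction.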

Table 4 Percentage of successful transactions. All contacts were successfully shared via QR on the first attempt. Participants managed to share all contacts successfully within 2 attempts for all three technologies

In terms of time taken to successfully send a contact (duration), Chirp was fastest on average (2.4 s), followed by QR (6.3 s) and BLE (8.3 s), as shown in Fig. 2. We fitted a linear mixed-effects regression model using the lme4 package for R [4], with duration as the response variable, fixed effects of mode, order and transaction number (with an interaction term between mode and transaction number), and random intercepts for the sender and receiver participants. Model assumptions of normality and homoskedasticity of the residuals were checked by visual inspection. We observed heteroskedasticity in the residuals of the fitted model (with the amount of variance and duration time being positively correlated, see Fig. 2), which was rectified by log-transforming duration.

Fig. 2

Time taken to share contact information for each technology

The effect of each factor was tested using a full factorial type III analysis of variance (ANOVA) with Satterthwaite’s degrees of freedom approximation from the lmerTest package [44]. We found a significant effect of mode (F(2,77.3) = 52.5, p < 0.001), transaction number (F(5,77.3) = 10.6, p < 0.001), and a small but significant interaction between mode and transaction number (F(2,76.9) = 4.1, p < 0.001). There was no effect of order on the duration, i.e. the transaction duration did not change as users’ familiarity with the application and task increased, as shown in Fig. 3.

Fig. 3

Effect of order of mode presentation on the time taken to share a contact (mean and standard error bars)

The significant interaction between mode and transaction number means that it is not reasonable to analyse this model in terms of main effects [60]; therefore, we conducted a post hoc analysis of interaction contrasts between these factors using the phia package for R [20]. This showed a significant interaction for QR and BLE between transactions 1 and 2 (χ2(1) = 13.3, p < 0.01) and 1 and 5 (χ2(1) = 12.5, p < 0.01). There are also significant interactions for QR and Chirp between transaction 1 and each of 2 (χ2(1) = 14.1, p < 0.01), 3 (χ2(1) = 14.8, p < 0.01), 5 (χ2(1) = 15.9, p < 0.01), 6 (χ2(1) = 18.2, p < 0.001), and between transactions 4 and 5 (χ2(1) = 7.7, p < 0.05), and 4 and 6 (χ2(1) = 9.6, p < 0.05). These interactions are shown in Fig. 4. This highlights that the difference in transaction duration is dependent on whether the contact is being shared for the first time. When a set of contacts is shared, the first contact takes significantly longer than the subsequent contacts for QR. This effect is also observed, albeit to a lesser extent, for BLE, but is not the case for Chirp, where the transaction number has no effect on duration.

Fig. 4
Effect of transaction number on the time taken to share a contact, by mode (mean and standard error bars)
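For intuition, an interaction contrast of this kind tests a difference of differences. A minimal sketch, using hypothetical mean durations (not the study data):

```python
# A difference of differences: how much more QR speeds up between
# transactions 1 and 2 than Chirp does. The means are hypothetical,
# invented purely for illustration.
means = {  # seconds, keyed by (mode, transaction number)
    ("QR", 1): 9.0, ("QR", 2): 5.5,
    ("Chirp", 1): 2.5, ("Chirp", 2): 2.4,
}

qr_drop = means[("QR", 1)] - means[("QR", 2)]           # 3.5
chirp_drop = means[("Chirp", 1)] - means[("Chirp", 2)]  # ~0.1

interaction_contrast = qr_drop - chirp_drop
print(round(interaction_contrast, 1))  # 3.4
```

A large contrast indicates that the first-transaction penalty differs between the two modes, which is exactly what the χ2 tests above assess.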

UMUX survey

After finishing each session, participants completed the four-question UMUX survey. The questions and their related usability components are given in Table 5.

Table 5 UMUX scale items from the survey presented to participants at the end of each session, and their corresponding usability components

Participants’ responses to the UMUX are summarised in Fig. 5. A Friedman rank sum test showed a significant difference between the responses for questions A, B and D: A (χ2(3, N = 36) = 14.1, p < 0.01); B (χ2(3, N = 36) = 18.0, p < 0.001); D (χ2(3, N = 36) = 25.6, p < 0.001). No significant difference was found between the responses for question C.

Fig. 5
Participant responses to the UMUX following each session. Scale coding from 1 (strongly disagree) to 7 (strongly agree)
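As an illustration of the test used here, a Friedman rank sum test can be run with scipy. The Likert responses below are invented for illustration (and use three conditions); they are not the study responses.

```python
from scipy.stats import friedmanchisquare

# Hypothetical 7-point Likert responses (invented, not the study data):
# 12 participants rate one UMUX item under each of three modes. The test
# compares the within-participant rank orderings across conditions.
chirp = [6, 7, 6, 5, 7, 6, 7, 6, 5, 7, 6, 6]
qr    = [6, 6, 7, 5, 6, 5, 6, 7, 5, 6, 5, 6]
ble   = [3, 2, 4, 2, 3, 1, 2, 3, 2, 4, 2, 3]

stat, p = friedmanchisquare(chirp, qr, ble)
print(f"chi2 = {stat:.1f}, p = {p:.3g}")
```

Because the test operates on within-participant ranks, it is robust to individual differences in how participants use the Likert scale.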

A pairwise Wilcoxon signed-rank test (with Bonferroni correction) was performed on the modes for questions A, B and D, showing a significant difference between the responses for BLE and both the QR and Chirp modes, as shown in Table 6.

Table 6 P values from a Wilcoxon test for the pairwise comparison between responses for each mode, by question. All values were adjusted for each question using the Bonferroni correction
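A sketch of this post hoc procedure on hypothetical paired Likert data, using scipy's `wilcoxon` with a manual Bonferroni adjustment (all response values are invented, not the study data):

```python
from itertools import combinations
from scipy.stats import wilcoxon

# Hypothetical paired 7-point Likert responses for one question
# (invented for illustration, not the study data).
responses = {
    "Chirp": [7, 6, 7, 5, 6, 7, 7, 6, 5, 7, 6, 7],
    "QR":    [6, 6, 7, 5, 5, 7, 6, 6, 5, 6, 6, 7],
    "BLE":   [3, 2, 4, 1, 3, 2, 4, 3, 2, 1, 3, 2],
}

pairs = list(combinations(responses, 2))
adjusted = {}
for a, b in pairs:
    _, p = wilcoxon(responses[a], responses[b])
    # Bonferroni: scale each raw p value by the number of comparisons,
    # capping at 1.
    adjusted[(a, b)] = min(1.0, p * len(pairs))

for (a, b), p_adj in adjusted.items():
    print(f"{a} vs {b}: adjusted p = {p_adj:.4f}")
```

With data shaped like this, only the comparisons involving the clearly lower-rated mode survive the correction, mirroring the pattern in Table 6.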

Semi-structured interviews

In addition to the application data and survey, participants were asked a set of open-ended questions during semi-structured interviews. The discussion points addressed participants’ preferences among the technologies (inviting them to explain the reasons for their choice), whether they experienced any difficulties completing the task (and if so, to describe them), and whether they felt the data transfer technology had any impact on the task. Finally, participants were invited to share their thoughts on the sound of Chirp. The main questions used as discussion points are given in Table 7.

Table 7 Main questions and discussion points from the semi-structured interviews

The interviews were video recorded and transcribed in order to conduct a qualitative analysis of the data. We followed an inductive approach to thematic analysis, performed at the latent level [9]. We present and discuss the main themes that emerged from the analysis, providing relevant extracts from the interviews for each theme.

User effort required / ease of use (12):

All participants commented on the effort required to complete the task with each of the three technologies, and felt that Bluetooth required significant effort due to the number of steps involved (“you have to select the device that you want to transfer the data to, and there are always lots of people phones in real life on Bluetooth”), (“it was slow and manual”), (“more interaction was required than the other methods”).

Participants reported that in some instances multiple attempts had to be carried out due to connection issues (“we had to wait a while for the Bluetooth to come on because it just would not pair for a while, then we just went back and started again”), (“it was slow, it kept buffering, so I had to keep going back”), and commented on the poor responsiveness of the technology compared to QR and Chirp (“Bluetooth was slow and we were not sure of what was happening”). This resulted in frustration and feelings of dislike towards the technology (“it annoys me when I have to wait and see if the signal is strong enough, [wait] for the signal to go through”).

Three participants commented on the ease of use of QR and their familiarity with the technology (“I used it before and I feel it’s very easy to use, it just scans quite easily..I guess it’s just what I’m best used to”), (“I found QR a lot quicker and I’ve had experience with it before so it was easier for me”).

Although they felt QR was the fastest of the technologies, 5 out of 12 participants reported that it required some degree of effort in terms of device proximity and alignment (“it’s annoying to have to match the camera to the QR code”), (“in the beginning there was a problem when we were too close and also we need two phones together, so it’s a bit more interaction”), (“I wasn’t sure at what angle I had to scan it”). Some participants also reported disliking the QR interaction due to issues encountered in low lighting conditions (“I don’t really like using QR codes in the real world because if the lighting is not right or you just have trouble positioning the phones”), (“I think the QR code was fastest but I don’t like having to scan a code”).

Half of the participants (6 out of 12) agreed that Chirp was very easy to use and required minimal user effort for completing the task (“Chirp was quite easy, it’s just one step”), (“Chirp was still a lot better than QR code because it wasn’t as fiddly”), (“Chirp was really easy, you just had to click and it was done”), (“I found Chirp really easy to transfer”), (“Chirp is good in that you don’t have to move your phone and, I don’t know how far away you can be from the other person but, it seems like it would work quite well”). There were no reports of Chirp being difficult to use or requiring effort.

Perceived transfer speed (12):

All participants based their preference on the perceived speed of the data transfer (“when it was just done quickly it felt more efficient, it kind of felt better”), (“the faster it works the better it is”).

QR: (“QR [was my preferred method] because it was really fast”), (“QR code it’s quick and easy to use”).

Chirp: (“Chirp was the best because I didn’t have to wait for the signal to be strong enough, and I didn’t have to pair”), (“it was unexpected, in the sense that when I share and then the sound comes out and it’s done”), (“it was faster than Bluetooth and QR”), (“it was very very fast”), (“I had to press only one button and bang! it was done”), (“it was so instant, I was so impressed by it”).

However, it should be noted that user perception of the transaction time is subjective, and it is unclear whether participants judged the time from the moment they started playing out the scenario, or only from the moment they actively shared the data.

Sound (12):

Participants expressed mixed feelings about the sound emitted by Chirp. Feelings of dislike were mostly associated with the loudness of the sound, with seven participants feeling the volume was too high (“it was a bit high”), (“it was quite loud”), (“it was too high pitched”), whereas two participants reported not liking the sound itself (“I didn’t like the sound”), (“it was a very squishy sound”). However, these participants confirmed they would not have an issue with the sound if they were able to set the volume lower (“if it was a quieter sound then I feel it’d be fine”), (“it was fine, maybe the volume could be lower”).

Three participants mentioned they would like control over the sound (“I was wondering, can you control the volume?”), (“I would definitely want it with the sound. It could be slightly quieter. Maybe it’s great to have the option, but the sound is really cool”), (“if there was a change of sound with something a bit more pleasant it would be a bit better”), or the option of an ultrasonic (inaudible) version of the method (“[I’d prefer a version with] no sound”).

Four participants made positive comments about the sound (“I thought it was really cool”), (“it’s a lovely sound”), (“it’s a really nice sound and you felt like something is happening”), (“I was fascinated by the sound”), (“it has a certain tonality”), (“it’s very unique”), (“it had a calming effect”).

Two participants reported that the sound provided feedback on the state of the task (“it’s going on”), (“you felt like something is happening”), and another participant felt the sound would benefit hearing-impaired users (“I thought it would be good for people with hearing difficulties”).

Novelty of data over sound (3):

Three participants expressed interest in the novelty of the approach (“it was really cool that it was transferring data through sound”), (“I did like the idea of the Chirp [..] it’s something different from anything I’ve ever used before”), (“it was a completely new thing”).


We presented a first evaluation of the user experience of acoustic data exchange, by developing a simple contact sharing application through which users could exchange contacts via BLE, QR and our implementation of acoustic data transmission, Chirp. From our observations, it emerged that participants generally considered transaction time to be the main factor in determining their preferred data transfer method, irrespective of the effort required. The differences in transaction time are limited by hard floors inherent to each technology. For Chirp, this floor is determined solely by the data rate. For BLE, it is determined by the data rate, the scanning period (which determines the speed with which devices are detected) and the number of devices the user has to choose from (which depends on the number of active Bluetooth users within range). For QR, the factors are more complex: a successful transaction requires coordination and communication between users, and physical effort to align devices.
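As a toy illustration of such a floor for acoustic transmission, the minimum transfer time is bounded by the payload size divided by the data rate. Both figures below are hypothetical, taken neither from the study nor from Chirp's specification.

```python
# Lower bound on acoustic transfer time: payload size over data rate.
# Both figures are hypothetical, not measurements from this study.
payload_bits = 50 * 8    # e.g. a 50-byte contact record
data_rate_bps = 400      # assumed acoustic data rate, bits per second

floor_seconds = payload_bits / data_rate_bps
print(floor_seconds)  # 1.0
```

No amount of interface design can push the transfer below this bound; only a denser encoding (a higher data rate) or a smaller payload can.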

This highlights that ‘technical’ specifications of technologies, based on metrics such as data transfer speeds, cannot be relied upon solely as determinants of their effectiveness in terms of interaction times. For example, QR codes have the potential to provide the fastest means of transferring data (up to a limited payload size). However, in practice, the scanning process can take a notable amount of time and effort. In addition, whilst BLE was the slowest technology overall, there was considerable variability in the data, and some cases where the transaction times were comparable to QR and Chirp, with the fastest BLE transfer being ∼1.5 s.

Perceived interaction time versus actual interaction time

Despite transaction time being a main factor in the user experience, there is a mismatch between the actual transaction time (the objective time, as defined for the quantitative analysis) and the time that users perceived the transaction to take, as indicated by the results of the UMUX survey. For example, QR was not necessarily faster over the whole transaction, due to the need to align phones. However, because the transfer seemed instantaneous once the phones were aligned, it created the perception of a fast transaction. This indicates that, although users tended to find the alignment process frustrating, they did not consider it part of the actual transaction of sharing a contact. In terms of user experience, it is the subjective experience of time, rather than the completion time recorded by the system, that counts.

Problematic time-related experiences do not occur when users are engaged in performing a task [69], but waiting and interruptions can cause negative experiences. Furthermore, a lack of information about the expected waiting time can increase the perceived waiting time [2], which consequently affects a user’s perception of the time taken for the whole interaction. Moreover, a user’s perception of the speed of an interaction (whether accurate or not) affects their enjoyment of the task [47]. Another factor to consider is the user tolerance threshold introduced by [69], which arises from a user’s expectations. If the perceived duration falls under this threshold, users will judge the interaction as fast; if it falls beyond the threshold, they will judge it as slow, independently of the actual duration. As such, we cannot rely on measured time alone as a proxy for user preference, but must consider the perceived interaction time when designing technologies for device-to-device communication that involve user interaction.

Effects of transaction number on interaction time

The pairs of participants transferred three contacts between each other, giving six transactions in total per session. Although they were not instructed to do so, participants tended to share all three of their contacts at once, before receiving three from their partner. Given this pattern of interaction, we found a notable effect of transaction number (1–6) for both QR and BLE, but not for Chirp (Fig. 4). The first and fourth transactions in each session tended to take longer than the third and sixth respectively, indicating that for multiple transactions in the same direction, transaction time is reduced with each subsequent contact shared. For QR, this can be explained by the initial transaction requiring the receiving phone to be positioned accordingly (whereas for subsequent transactions the phones were typically already in position). For BLE, it is likely to be indicative of a usability factor, i.e. once the user knows they have to select the device to send to, the subsequent transactions are naturally faster. As such, we might take the best-case transaction times by only looking at transactions 3 and 6; here, there is actually little difference between modes. Nonetheless, the effect of transaction number highlights an important usability difference in terms of people’s ability to use the technology immediately, for which Chirp outperforms both BLE and QR. This is a notable finding, particularly considering that all participants reported previous experience using BLE and QR, but not Chirp. In addition, it highlights that for applications where multiple items are sent in succession, interaction times may eventually reflect the technology-specific data rates.

Transaction failures

Beyond transaction time, one of the major user experience issues of device-to-device communication is when things go wrong and a transaction attempt is unsuccessful. Although all 108 transactions were eventually successful for all three technologies, there were instances where multiple attempts were required. For BLE, this was typically due to the recipient’s device not being found during the scanning process, and the users deciding to ‘go back’ and re-scan for devices. This is an issue that regular users of Bluetooth will be familiar with. For Chirp, there were two instances where the sound was not correctly decoded by the recipient’s device. Finally, the fact that all QR codes were successfully transferred on the first attempt to ‘share’ should be interpreted with caution, because although the senders never had to ‘go back’ and reopen the QR code, the recipients did not always manage to successfully scan the codes on the first attempt.

Audibility and audio volume

Finally, we found high variance in user preference for the sound of Chirp. In this study, we used an audible version of Chirp, in order to investigate the effect of ‘hearing’ the transaction (and thus increasing the visibility of the technology) from a user perspective. It has been previously shown that using modalities such as sound to convey information in the design of mobile interfaces reduces short-term memory loads [28], potentially enhancing the user experience. However, the participants did not appear to directly equate the audible transactions to a more ‘informative’ experience. In general, there was no clear consensus on whether the sound was perceived to be a positive or negative element of the interaction; some participants enjoyed the sound and novelty of the technology, whereas others disliked the aesthetic. In addition, many users expressed a preference to have some control over the loudness.

It should be noted that, during the study, the volume of the devices was set to a medium level and kept consistent for all participants. For future studies, it might be more suitable to allow participants to adjust the volume, or ask participants to set a volume of their choice before performing the task. Chirp does not inherently rely on being audible, and as mentioned in Section 3, inaudible transmission is possible. Therefore, in a real-world application, it may be desirable to provide some level of user control over the encoding method or to give the option of transmitting data using audible or near-ultrasonic (inaudible) signals.

Conclusions and future work

In this paper, we provided an initial evaluation of the use of wireless data-sharing technologies for peer-to-peer information sharing. We measured and compared three data-sharing technologies: Bluetooth (BLE), QR codes and Chirp (acoustic data transmission), in terms of the time taken to complete a transaction and the user experience of doing so.

Our main findings identify perceived transaction time as a major factor in determining user preference for each of the technologies in question. We found that real-world transaction times were lowest for Chirp, followed by QR codes, and considerably higher for BLE. In general, QR and Chirp offered significantly more positive user experiences than BLE for the basic contact-sharing task presented here, as confirmed by user feedback.

Users expressed frustration with BLE due to pairing and device selection issues, and with QR due to the physical coordination required to align devices and scan a code. In addition, users were divided on the aesthetics of the sound in Chirp’s implementation. However, all participants identified both QR and Chirp as easy to use and as meeting the requirements of the task.

This work identifies acoustic data transmission technologies such as Chirp as a promising alternative to the more common QR and BLE technologies, particularly for tasks that involve ‘one-off’ transactions of data between devices such as mobile phones, computers and tablets. However, further work is required to establish user preference for different data encoding schemes, each of which offers a different sonic aesthetic, and to further understand the role that the sound of audible data transmission plays in the overall user experience.


  1. Aiello GR, Rogerson GD (2003) Ultra-wideband wireless systems. IEEE Microwave Magazine 4(2):36–47

  2. Antonides G, Verhoef PC, Van Aalst M (2002) Consumer perception and evaluation of waiting time: a field experiment. J Consum Psychol 12(3):193–202

  3. Arp D, Quiring E, Wressnegger C, Rieck K (2017) Privacy threats through ultrasonic side channels on mobile devices. In: IEEE European Symposium on Security and Privacy, Paris, France, pp 35–47

  4. Bates D, Machler M, Bolker B, Walker S (2015) Fitting linear mixed-effects models using lme4. Journal of Statistical Software Articles 67(1):1–48.

  5. BBC News (2012) Chirp app sends smartphone data via ‘digital birdsong’. Last accessed: 18 February 2019

  6. Berkman MI, Karahoca D (2016) Re-assessing the usability metric for user experience (UMUX) scale. J Usability Stud 11(3):89–109

  7. Bihler P, Imhoff P, Cremers AB (2011) Smartguide–a smartphone museum guide with ultrasound control. Procedia Computer Science 5:586–592

  8. Bosley JJ (2013) Creating a short usability metric for user experience (UMUX) scale. Interact Comput 25 (4):317–319

  9. Braun V, Clarke V (2006) Using thematic analysis in psychology. Qualitative Research in Psychology 3 (2):77–101

  10. Brooke J, et al. (1996) SUS-a quick and dirty usability scale. Usability Evaluation in Industry 189(194):4–7

  11. Cairns P (2013) A commentary on short questionnaires for assessing usability. Interact Comput 25(4):312–316

  12. Carrara B, Adams C (2014) On acoustic covert channels between air-gapped systems. In: Proccedings of the International Symposium on Foundations and Practice of Security, pp 3–16

  13. Chen KM, Liou YC, Chen M (2011) NFC+: NFC-Assisted media sharing for mobile devices. In: Proceedings of the 13th International Conference on Ubiquitous Computing, Beijing, pp 575–576.

  14. Chen KY, Ashbrook D, Goel M, Lee SH, Patel S (2014) AirLink: sharing files between multiple devices using in-air gestures. In: Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing, Seattle, Washington, pp 565–569.

  15. Chirp (2011) Chirp. Last accessed: 12 February 2019

  16. Chong MK, Gellersen H (2010) Classification of spontaneous device association from a usability perspective. In: Proceedings of the 2nd International Workshop on Security and Privacy in Spontaneous Interaction and Mobile Device Use (IWSSI/SPMU’2010), Helsinki, Finland

  17. Chong MK, Gellersen H (2012) Usability classification for spontaneous device association. Pers Ubiquit Comput 16(1):77–89.

  18. Chong MK, Mayrhofer R, Gellersen H (2014) A survey of user interaction for spontaneous device association. ACM Computing Surveys (CSUR) 47(1):8

  19. Clawson J, Voida A, Patel N, Lyons K (2008) Mobiphos: a collocated-synchronous mobile photo sharing application. In: Proceedings of the 10th International Conference on Human Computer Interaction with Mobile Devices and Services, Amsterdam, The Netherlands, pp 187–195

  20. De Rosario–Martinez H (2015) phia: post-hoc interaction analysis. R package version 0.2-1

  21. Dodson B, Lam MS (2012) Micro-interactions with nfc-enabled mobile phones. In: Zhang JY, Wilkiewicz J, Nahapetian A (eds) Mobile Computing, Applications, and Services, Springer, Berlin, Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, pp 118–136

  22. Echtler F (2016) Calendarcast: Setup-free, privacy-preserving, localized sharing of appointment data. In: Proceedings of the 2016 Conference on Human Factors in Computing Systems, San Jose, pp 1374–1378.

  23. Finstad K (2010) The usability metric for user experience. Interact Comput 22(5):323–327

  24. Fürst J, Aljarrah M, Bonnet P (2016)

  25. Gerasimov V, Bender W (2000) Things that talk: using sound for device-to-device and device-to-human communication. IBM Syst J 39(3.4):530–546.

  26. Goel M, Lee B, Islam Aumi MT, Patel S, Borriello G, Hibino S, Begole B (2014) Surfacelink: using inertial and acoustic sensing to enable multi-device interaction on a surface. In: Proceedings of the 32nd annual ACM conference on Human factors in computing systems, ACM, pp 1387–1396

  27. Gomez C, Oller J, Paradells J (2012) Overview and evaluation of bluetooth low energy: an emerging low-power wireless technology. Sensors 12(9):11734–11753

  28. Gong J, Tarasewich P, et al. (2004) Guidelines for handheld mobile device interface design. In: Proceedings of DSI 2004 Annual Meeting, pp 3751–3756

  29. Green P, MacLeod CJ (2016) SIMR: an R package for power analysis of generalized linear mixed models by simulation. Methods Ecol Evol 7(4):493–498

  30. Greenberg S, Marquardt N, Ballendat T, Diaz-Marino R, Wang M (2011) Proxemic interactions: the new ubicomp? Interactions 18(1):42–50.

  31. Hinckley K (2003) Synchronous gestures for multiple persons and computers. In: Proceedings of the 16th Annual ACM Symposium on User Interface Software and Technology, Vancouver, pp 149–158.

  32. Hlobaz A, Podlaski K, Milczarski P (2014) Applications of QR codes in secure mobile data exchange. In: Proceedings of the International Conference on Computer Networks, Shanghai, pp 277–286

  33. Holmquist LE, Mattern F, Schiele B, Alahuhta P, Beigl M, Gellersen HW (2001) Smart-Its friends: a technique for users to easily establish connections between smart artefacts. In: Ubicomp 2001 Ubiquitous Computing, Springer, Berlin, Lecture Notes in Computer Science, pp 116–122

  34. Hornecker E, Marshall P, Dalton NS, Rogers Y (2008) Collaboration and interference: awareness with mice or touch input. In: Proceedings of the 2008 ACM Conference on Computer Supported Cooperative Work. ACM, San Diego, pp 167–176

  35. Hosman T, Yeary M, Antonio JK, Hobbs B (2010) Multi-tone FSK for ultrasonic communication. In: Proceedings of the Instrumentation and Measurement Technology Conference (I2MTC), Austin, pp 1424–1429

  36. ISO/IEC (1998) ISO/IEC 9241-11. Ergonomic requirements for office work with visual display terminals (VDTs)

  37. ISO/IEC (2015) ISO/IEC 18004:2015 - Information technology – automatic identification and data capture techniques – QR Code bar code symbology specification

  38. Istepanian RS, Stojanovic M (2002) Underwater acoustic digital signal processing and communication systems. Springer, Berlin

  39. Ka S, Kim TH, Ha JY, Lim SH, Shin SC, Choi JW, Kwak C, Choi S (2016) Near-ultrasound communication for TV’s 2nd screen services. In: Proceedings of the 22nd Annual International Conference on Mobile Computing and Networking. ACM, New York, pp 42–54

  40. Kanaoka R, Tobe Y (2014) Design of a data transfer system on smartphones using Wi-Fi direct and accelerometers. In: IEEE 3Rd global conference on consumer electronics (GCCE), Las Vegas, USA, pp 71–75

  41. Kato H, Tan KT (2007) Pervasive 2d barcodes for camera phone applications. IEEE Pervasive Computing 6(4):76–85

  42. Kravets RH (2012) Enabling social interactions off the grid. IEEE Pervasive Computing 11(2):8–11.

  43. Kray C, Rohs M, Hook J, Kratz S (2009) Bridging the gap between the Kodak and the Flickr generations: a novel interaction technique for collocated photo sharing. International Journal of Human-Computer Studies 67 (12):1060–1072

  44. Kuznetsova A, Brockhoff PB, Christensen RHB (2016) lmerTest: tests for random and fixed effects for linear mixed effect models (lmer objects of lme4 package). R package version 2.0.33

  45. Lewis JR, Utesch BS, Maher DE (2013) UMUX-LITE: when there’s no time for the SUS. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, Paris, pp 2099–2102

  46. Liao KC, Lee WH (2010) A novel user authentication scheme based on QR-code. Journal of Networks 5 (8):937

  47. Liikkanen LA, Gómez PG (2013) Designing interactive systems for the experience of time. In: Proceedings of the 6th International Conference on Designing Pleasurable Products and Interfaces. ACM, Newcastle, pp 146–155.

  48. Liu P, Yi SP (2017) The effects of extend compatibility and use context on NFC mobile payment adoption intention. In: Advances in Human Factors and System Interactions, Springer, pp 57–68

  49. Lopes CV, Aguiar PMQ (2003) Acoustic modems for ubiquitous computing. IEEE Pervasive Computing 2(3):62–71.

  50. Lucero A, Keränen J, Jokela T (2010a) Social and spatial interactions: shared co-located mobile phone use. In: CHI’10 Extended Abstracts on Human Factors in Computing Systems, Atlanta, pp 3223–3228

  51. Lucero A, Keränen J, Korhonen H (2010b) Collaborative use of mobile phones for brainstorming. In: Proceedings of the 12th International Conference on Human Computer Interaction with Mobile Devices and Services. ACM, Lisbon, pp 337–340

  52. Lucero A, Holopainen J, Jokela T (2011) Pass-them-around: collaborative use of mobile phones for photo sharing. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, Vancouver, pp 1787–1796.

  53. Lucero A, Jokela T, Palin A, Aaltonen V, Nikara J (2012) EasyGroups: binding mobile devices for collaborative interactions.

  54. Lundgren S, Fischer JE, Reeves S, Torgersson O (2015) Designing mobile experiences for collocated interactions. In: Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing, Vancouver, pp 496–507

  55. Madhavapeddy A, Scott D, Sharp R (2003) Context-aware computing with sound. In: Dey AK, Schmidt A, McCarthy JF (eds) Ubicomp 2003: Ubiquitous Computing. Springer, Berlin, pp 315–332

  56. Madhavapeddy A, Sharp R, Scott D, Tse A (2005) Audio networking: the forgotten wireless technology. IEEE Pervasive Computing 4(3):55–60

  57. Mavroudis V, Hao S, Fratantonio Y, Maggi F, Kruegel C, Vigna G (2017) On the privacy and security of the ultrasound ecosystem. Proceedings on Privacy Enhancing Technologies 2017(2):95–112

  58. Mayrhofer R, Gellersen H (2009) Shake well before use: intuitive and secure pairing of mobile devices. IEEE Trans Mob Comput 8(6):792–806.

  59. Miegel J, Branch P, Blamey P (2018) Wireless communication between personal electronic devices and hearing aids using high frequency audio and ultrasound. The Journal of the Acoustical Society of America 144 (4):2598–2604

  60. Nelder J (1977) A reformulation of linear models. Journal of the Royal Statistical Society Series A (General) pp 48–77

  61. Nielsen J (1994a) Enhancing the explanatory power of usability heuristics. In: Proceedings of the SIGCHI conference on Human Factors in Computing Systems, Boston, Massachusetts, USA, pp 152–158

  62. Nielsen J (1994b) Ten usability heuristics.

  63. Nintendo (2011) What is StreetPass? Last accessed: 14 September 2018

  64. Nseir S, Hirzallah N, Aqel M (2013) A secure mobile payment system using QR code. In: Proceedings of the 5th International Conference on Computer Science and Information Technology, pp 111–114

  65. Raza S, Misra P, He Z, Voigt T (2015) Bluetooth smart: an enabling technology for the Internet of Things. In: IEEE 11th International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob), pp 155–162

  66. Reed IS, Solomon G (1960) Polynomial codes over certain finite fields. Journal of the Society for Industrial and Applied Mathematics 8(2):300–304

  67. Ryan M (2013) Bluetooth: With low energy comes low security. In: 7th {USENIX} Workshop on Offensive Technologies, Washington, USA

  68. Santagati GE, Melodia T (2015) U-wear: Software-defined ultrasonic networking for wearable devices. In: Proceedings of the 13th Annual International Conference on Mobile Systems, Applications, and Services, Florence pp 241–256

  69. Seow SC (2008) Perception and tolerance. In: Designing and Engineering Time: The Psychology of Time Perception in Software, Addison-Wesley Professional, Chap 2, pp 15–32

  70. Shokri-Ghadikolaei H, Fischione C, Fodor G, Popovski P, Zorzi M (2015) Millimeter wave cellular networks: a MAC layer perspective. IEEE Trans Commun 63(10):3437–3458

  71. Soriente C, Tsudik G, Uzun E (2008) HAPADEP: Human-assisted pure audio device pairing. In: Wu TC, Lei CL, Rijmen V, Lee DT (eds)

  72. Stüber GL (2002) Principles of mobile communication. Kluwer, New York

  73. Stute M, Kreitschmann D, Hollick M (2018) One billion apples’ secret sauce: recipe for the apple wireless direct link ad hoc protocol. In: Proceedings of the 24th Annual International Conference on Mobile Computing and Networking, New Delhi, pp 529–543

  74. Sun Z, Purohit A, Bose R, Zhang P (2013) Spartacus: Spatially-aware interaction for mobile devices through energy-efficient audio sensing. In: Proceeding of the 11th Annual International Conference on Mobile Systems, Applications, and Services, Taipei, pp 263–276.

  75. Tomlinson BJ, Noah BE, Walker BN (2018) Buzz: an auditory interface user experience scale. In: Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems, Montréal, Canada

  76. Uzun E, Karvonen K, Asokan N (2007) Usability analysis of secure pairing methods. In: Dietrich S, Dhamija R (eds) Financial cryptography and data security, Springer, Berlin, Lecture Notes in Computer Science, pp 307–324

  77. Wang K, Yang Z, Zhou Z, Liu Y, Ni L (2015) Ambient rendezvous: energy-efficient neighbor discovery via acoustic sensing. In: IEEE Conference on computer communications, Hong Kong, pp 2704–2712

  78. Want R, Schilit BN, Jenson S (2015) Enabling the Internet of Things. Computer 48(1):28–35

  79. Williams E (1949) Experimental designs balanced for the estimation of residual effects of treatments. Aust J Chem 2(2):149–168

  80. Zhang H, Du W, Zhou P, Li M, Mohapatra P (2017) An acoustic-based encounter profiling system. IEEE Trans Mob Comput 17(8):1750–1763

Author information


Corresponding author

Correspondence to Adib Mehrabi.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit

About this article

Cite this article

Mehrabi, A., Mazzoni, A., Jones, D. et al. Evaluating the user experience of acoustic data transmission. Pers Ubiquit Comput 24, 655–668 (2020).



Keywords

  • Acoustic data transmission
  • Mobile
  • User experience
  • Wireless data transfer
  • Audio
  • Peer-to-peer connectivity