Journal on Multimodal User Interfaces, Volume 9, Issue 3, pp 223–229

Synchronizing multimodal recordings using audio-to-audio alignment

An application of acoustic fingerprinting to facilitate music interaction research
Original Paper

Abstract

Research on the interaction between movement and music often involves the analysis of multi-track audio, video streams, and sensor data. To facilitate such research, a framework is presented here that allows synchronization of multimodal data. A low-cost approach is proposed that synchronizes streams by embedding ambient audio into each data stream, effectively reducing the synchronization problem to audio-to-audio alignment. As part of the framework, a robust, computationally efficient audio-to-audio alignment algorithm is presented for reliable synchronization of embedded audio streams of varying quality. The algorithm uses audio fingerprinting techniques to measure offsets. It also identifies drift and dropped samples, making it possible to find a synchronization solution under those conditions as well. The framework is evaluated with synthetic signals and a case study, showing millisecond-accurate synchronization.
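The core idea of the abstract — embed the same ambient audio in every stream, then measure the offset between the embedded copies — can be illustrated with a minimal sketch. The paper itself uses acoustic fingerprinting (spectral landmarks); as a simplified, hedged stand-in, the sketch below cross-correlates coarse energy envelopes of two recordings and picks the lag with the highest score. All function names and parameters here are illustrative, not the authors' implementation.

```python
def energy_envelope(samples, frame=256):
    """Coarse per-frame energy; a cheap, noise-tolerant summary of the audio."""
    return [sum(x * x for x in samples[i:i + frame])
            for i in range(0, len(samples) - frame + 1, frame)]

def estimate_offset(ref, query, frame=256, max_lag=64):
    """Estimate where `query` starts inside `ref`, in samples.

    Cross-correlates the two energy envelopes over a window of candidate
    lags and returns the best-scoring lag, converted back to samples
    (so the result is frame-accurate, not sample-accurate).
    """
    a, b = energy_envelope(ref, frame), energy_envelope(query, frame)
    best_score, best_lag = float("-inf"), 0
    for lag in range(-max_lag, max_lag + 1):
        # Overlapping region of the two envelopes at this lag.
        overlap = range(max(0, -lag), min(len(b), len(a) - lag))
        score = sum(a[j + lag] * b[j] for j in overlap)
        if score > best_score:
            best_score, best_lag = score, lag
    return best_lag * frame
```

A fingerprinting approach as described in the paper replaces the envelope comparison with matching of robust spectral-peak hashes, which also makes it possible to detect drift and dropped samples by tracking how matched offsets evolve over time.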

Keywords

Multimodal data synchronization · Audio fingerprinting · Audio-to-audio alignment · Music performance research · Digital signal processing

Supplementary material

Supplementary material 1: 12193_2015_196_MOESM1_ESM.7z (74.9 MB)

Copyright information

© OpenInterface Association 2015

Authors and Affiliations

  1. Department of Musicology, Institute for Psychoacoustics and Electronic Music (IPEM), Ghent University, Ghent, Belgium
