Processing of Short Auditory Stimuli: The Rapid Audio Sequential Presentation Paradigm (RASP)
Human listeners are remarkably adept at recognising acoustic sound sources from timbre cues alone. Here we describe a psychophysical paradigm for estimating the time it takes to recognise a set of complex sounds differing only in timbre, both in terms of the minimum sound duration required and the inferred neural processing time. Listeners had to respond to the human voice while ignoring a set of distractors. All sounds were recorded from natural sources over the same pitch range and equalised to the same duration and power. In a first experiment, stimuli were gated in time with a raised-cosine window of variable duration and random onset time, and a voice/non-voice (yes/no) task was used. Performance, as measured by d′, remained above chance for the shortest sounds tested (2 ms); d′ values above 1 were observed for durations of 8 ms and longer. In a second experiment, we constructed sequences of short sounds presented in rapid succession. Listeners were asked to report the presence of a single voice token that could occur at a random position within the sequence. This method is analogous to the "rapid serial visual presentation" (RSVP) paradigm, which has been used to evaluate neural processing time for images. For 500-ms sequences made of 32-ms and 16-ms sounds, d′ remained above chance for presentation rates of up to 30 sounds per second. Performance was unaffected by the pitch relation between successive sounds, whether pitch was identical for all sounds in a sequence or randomised for each sound. This implies that the task was not determined by streaming or forward masking, as both phenomena would predict better performance in the random-pitch condition. Overall, the recognition of familiar sound categories such as the voice appears to be surprisingly fast, both in terms of the acoustic duration required and of the underlying neural time constants.
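The two computational ingredients of the paradigm described above, raised-cosine gating of the stimuli and the d′ sensitivity index for the yes/no task, can be sketched as follows. This is a minimal illustration, not the authors' code; function names, parameters, and the log-linear correction for extreme rates are assumptions.

```python
import math
from statistics import NormalDist

def raised_cosine_gate(samples, fs, duration_ms, onset_ms):
    """Excerpt a segment of `duration_ms` starting at `onset_ms` and
    apply a raised-cosine (Hann) window over its full extent.
    Illustrative sketch only; the paper does not specify ramp details."""
    n = int(fs * duration_ms / 1000)
    start = int(fs * onset_ms / 1000)
    window = [0.5 * (1 - math.cos(2 * math.pi * i / (n - 1))) for i in range(n)]
    return [s * w for s, w in zip(samples[start:start + n], window)]

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate),
    with a log-linear correction (add 0.5 to each count) so that
    perfect rates do not yield infinite z-scores (an assumption;
    the correction actually used is not stated in the abstract)."""
    h = (hits + 0.5) / (hits + misses + 1)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(h) - z(f)
```

For example, an observer with 45 hits and 5 false alarms out of 50 trials each gets a d′ of about 2.5, well above the d′ = 1 level reported for 8-ms sounds, whereas equal hit and false-alarm counts give d′ = 0 (chance).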
Keywords: Sound Source · Fixed Duration · Presentation Rate · Musical Instrument · Target Sound
This work was supported by the Fondation Pierre Gilles de Gennes pour la Recherche.