Melody recognition entails the encoding of pitch intervals between successive notes. While it has been shown that a whole melodic sequence is better encoded than the sum of its constituent intervals, the underlying reasons have remained opaque. Here, we compared listeners’ accuracy in encoding the relative pitch distance between the two notes of an interval (for example, C, E) with their accuracy under three modifications: (1) doubling the duration of each note (C – E –), (2) repeating each note (C, C, E, E), and (3) adding a preceding note (G, C, E). Repeating each note (2) or adding an extra note (3) improved encoding of relative pitch distance when the melodic sequences were transposed to other keys, but lengthening the duration (1) did not improve encoding relative to the standard two-note interval sequences. Crucially, encoding accuracy was higher with the four-note sequences than with the long two-note sequences even though sensory (pitch) information was held constant. We interpret these results to show that re-forming the Gestalts of two-note intervals into two-note “melodies” yields more accurate encoding of relational pitch information, owing to a richer structural context in which to embed the interval.
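To make the relation between the four sequence types and transposition concrete, the following minimal Python sketch (with illustrative pitches and durations that are not the authors’ actual stimulus parameters) represents each condition as MIDI note numbers and shows that transposing a sequence, i.e., shifting every pitch by a constant number of semitones, leaves its interval pattern intact.

```python
# Illustrative sketch only: the four sequence types described in the abstract,
# with pitches as MIDI note numbers and durations in arbitrary beats.
# The specific pitches/durations are assumptions for demonstration.

C4, E4, G4 = 60, 64, 67  # example pitches; the actual stimuli may differ

conditions = {
    "standard":       [(C4, 1), (E4, 1)],                    # C, E
    "long_duration":  [(C4, 2), (E4, 2)],                    # C -, E -
    "repeated_notes": [(C4, 1), (C4, 1), (E4, 1), (E4, 1)],  # C, C, E, E
    "added_note":     [(G4, 1), (C4, 1), (E4, 1)],           # G, C, E
}

def transpose(sequence, semitones):
    """Shift every pitch by the same number of semitones (a key change)."""
    return [(pitch + semitones, dur) for pitch, dur in sequence]

def intervals(sequence):
    """Signed pitch distances (in semitones) between successive notes."""
    pitches = [p for p, _ in sequence]
    return [b - a for a, b in zip(pitches, pitches[1:])]

# Relative pitch distance is preserved under transposition:
standard = conditions["standard"]
assert intervals(standard) == intervals(transpose(standard, 5))
```

The point of the sketch is simply that recognizing a transposed sequence requires encoding relative pitch distances (the output of `intervals`), since absolute pitches change with the key.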
Keywords: Music · Melody · Gestalt · Interval · Pitch · Recognition
The authors thank Cory Kendrick, Kevin Miller, and Samuel Lloyd for their help with data collection. We thank Bodo Winter for his helpful tutorial on linear mixed-effects modeling in R and for his advice, via personal communication, on the analyses for our study. Yune-Sang Lee extends special thanks to Prof. Jay Hull for his generous help with the other statistical analyses. Lastly, we are truly grateful to the reviewing editor, Dr. Bob McMurray, and two anonymous reviewers for their valuable comments and suggestions.